A Deep Dive into Artificial Intelligence

What is Artificial Intelligence (AI)? According to NASA, AI “refers to computer systems that can perform complex tasks normally done by human reasoning, decision making, creating, etc.” NASA also states that there is “no single, simple definition” of AI, and that is because it is changing and growing constantly.

As I speak with people on the topic, I tend to receive two responses: one of fear and one of reckless abandon. There are those who are extremely concerned about AI and what it will do to us as human beings. Then there are those who can’t wait to open Pandora’s Box and see all the wonderful benefits waiting to be used.

In the little research I have done about AI, I have discovered that, in general, there are three fundamental components of all AI systems. There is Data, from which a system learns and on which it bases its decisions. Without large quantities of data, there are no decisions. There are Algorithms, the sets of rules systems use to process those large quantities of data. Then there is Computing Power: AI systems need substantial computing resources to push all that data through their complex algorithms. As you can imagine, running these AI systems requires large quantities of power.

As far as the history of AI goes, the groundwork for the idea began in the early 1900s, but the largest advances are recent. Alan Turing began exploring the possibility of machine intelligence in the 1950s; in 1950 he published a paper entitled “Computing Machinery and Intelligence,” in which he proposed a test of machine intelligence. He called this test the Imitation Game, and it eventually became known as the Turing Test. This was a watershed moment, as AI technology began to develop rapidly after this point.

Computer development accelerated through the 70s and 80s as processing speeds increased, producing faster, cheaper, more accessible computers. During this time, the first AI programming language was created, but computers were still too weak to demonstrate any kind of intelligence. The 80s were a time of growth and of increased interest in AI, due, in part, to breakthroughs in research, which increased funding opportunities. The 90s produced the first functioning AI systems: the first AI to defeat a world champion chess player, AI robots, robotic vacuum cleaners, and AI speech recognition software. In the late 1990s and 2000s, there were significant advances in AI. Automation and machine learning were used to solve problems in academia as well as in the real world, which brings us to today.

There are AI systems all around us, and their use continues to increase daily. AI is used in law, medicine, education, engineering, science and more. There are enormous benefits to its use. It can solve problems and diagnose diseases, but like anything else, with the benefits come the detriments. There are detriments, even though I have spoken to several people who see none. I have my own concerns, but for today I will address just one: entropy.

AI systems are created with entropy in mind, but the entropy I want to consider here is the entropy found in thermodynamics. The second law of thermodynamics states that the entropy of an isolated system can only increase or remain constant; it never decreases. As great as AI is, it is still a created system, and it still must deal with entropy. I tend to look at entropy and its relationship with AI from the perspective of physics, which holds that systems tend to move towards a greater state of disorder and randomness, not away from it. If I am right to look at entropy’s relationship with AI this way, what does it say about AI’s future? Is it endless? Is it immune to entropy?
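
Stated formally (this is the standard textbook formulation of the law, nothing specific to AI), the second law says that the entropy S of an isolated system never decreases over time:

$$\Delta S \ge 0 \quad \text{for any process in an isolated system.}$$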

As AI becomes more a part of its own data, and by data I mean the content it creates that is added to the data to which it has access, what will happen to its state of entropy? Will it decrease or increase? I believe it will most certainly increase. I do see, in the distant future, an ancestral relationship with its data once its database moves past a 50% bifurcation point. What I mean by this is that at some point in the future I see AI creating so much data that becomes part of the database (the internet) that it begins to use its own created data to make decisions. Will this matter?
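
To make that 50% idea concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the annual growth rate of the corpus, the share of new content that is machine-generated) is a hypothetical illustration chosen only for the example, not a measurement; the point is simply to show how a growing synthetic share could tip the overall database past the bifurcation point I am describing.

```python
# Illustrative sketch only: all parameters are hypothetical, not empirical estimates.

def years_to_majority(initial_corpus=1.0, annual_growth=0.15,
                      ai_share_of_new=0.7, horizon=50):
    """Return the first year in which machine-generated content exceeds 50%
    of the total corpus, or None if it never does within the horizon."""
    human = initial_corpus   # human-authored content (arbitrary units)
    ai = 0.0                 # machine-generated content
    for year in range(1, horizon + 1):
        new = (human + ai) * annual_growth       # the corpus grows each year
        ai += new * ai_share_of_new              # part of the growth is AI output
        human += new * (1 - ai_share_of_new)     # the rest is human-authored
        if ai / (human + ai) > 0.5:
            return year
    return None

if __name__ == "__main__":
    print(years_to_majority())  # with these toy numbers, the crossover lands around year 9
```

Change the assumptions and the crossover date moves, but as long as machines write a majority of what gets added, the direction of travel does not.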

I do think this will matter. What will it do to its ability to think and reason? Here is a harder question: will it be too late? What I mean by that question is, will it be too late for us at this future date due to our conditioning and dependence on AI systems? If, at a future date, this ancestral state of entropy is reached and it results in AI systems suddenly providing false information or some untrue “truth,” will we be able to recognize this information as false, or will we be too far gone? There are hard questions not being discussed regarding AI that need to be discussed. Will we take the time to discuss them, or are we in too much of a hurry to usher in AI as the solution to all our problems? When that day comes, AI will be the least of our worries. Until next time …

Deconstructing Deconstructivism: Part IV

We have examined deconstructivism, but there is one question that remains: how are we to respond to it? I would suggest that any response begins by, first, coming to an understanding of instability as it is defined by deconstructivism. There are questions that come with any speculation about instability as an organic state of language. What if its organic state was, instead, something else? What if instability was merely the tension of determination? Derrida provided some support for this type of thinking when he referred to the openness of instability as “aporia,” describing it as a puzzle or quandary (Jackson & Massei, 2012). What if “aporia” was that which was imposed over meaning by deconstructivism? Nothing here is certain, and none of it should be thought of in those terms.

Let’s begin this post, instead, with some hard truth: instability is a part of life, but it is a part of life we fight against. No one wants to be unstable, even when it comes to language. We seek clarity, not confusion, especially in our communication. What do we do with confusion? We look for ways to clarify it and eliminate it, and yet deconstructivism seems, to me, to seek to keep it. I see it seeking to become the means of clarity. Do I dare go further? I think it goes beyond clarity and seeks to be “the” means of meaning. What other purpose would it have for keeping instability alive, especially in language? Let’s back up a bit and look at what instability does if left on its own. Simply, it destroys stability. As we have studied deconstructivism, we have referenced its interaction with norms. Instability undermines stability, especially when it comes to stable norms. Derrida advocated that an important part of the process of deconstructivism was to keep asking questions, a theoretical device used to keep meaning and language from settling into sameness, because sameness is never seen as a critical tool of analysis or as something positive. Sameness is never welcomed in critical analysis; it is always viewed with suspicion and treated as bias.

Derrida saw both language and thought as living in what he called binary opposition, which he suggested was a confirmation of the instability of language. He saw language relying on opposing concepts like good/evil, true/false and happy/sad to sustain itself. He did not see these as part of the natural state of language but as constructions imposed on meaning and language by human beings. What is language if not a tool of communication for human beings? I do not see language as an entity unto itself; I see it strictly as a tool used by human beings to communicate. In my reading of Derrida, I did not get the sense that he saw language in the same light as I see it. He saw these binary oppositions as existing within a dangerous possibility: that one term would be given privileged status over the other, thus affecting the natural state and balance of language. He claimed that this privileged status (one term over the other) prevented meaning from “disseminating out beyond its initial intended meaning” in multiple directions, which assumes language is not a tool but an entity unto itself. One question I have regarding binary oppositions is this: do they not define each other? Is not dark the absence of light? Is not false simply the answer that is not the right one? What is the alternative if these binary oppositions are removed? I do not see them as constructions of human beings but, instead, as observations made by human beings. Human beings did not create darkness or truth or even goodness. They observed their presence, or their absence, in many cases through their binary opposites.

When it comes to communication, I do not seek to protect two binary opposed meanings, at least not when I am seeking to be clear in my communication. Communication, for me, is determining shared meaning for the purposes of effective and clear communication. It is understanding meaning and embracing the same meaning. Did Derrida see language as impacted by context, or was he afraid of the impact of context? I am not sure. Derrida claimed to have seen language and thought as undecidable (his word), a term he used to describe meaning as having no clear resolution, which, from my perspective, leaves language in one place … a state of confusion, which could also be referenced as instability. Is this what he saw, or is this what he needed language to be for deconstructivism to grow and thrive? How we see and respond to deconstructivism will do one of two things: it will either feed it or starve it and kill it.

Deconstructivism is often referenced with terms like unpacking, destabilizing and undermining in regard to its interaction with norms; it defines those norms that are stable as assumptions, as binaries and as privileged. These are intentionally negative terms, designed, in my opinion, to mark them for unpacking or destabilizing. But, again, what if the theory of deconstructivism is wrong when it comes to norms? What if instability is not a natural state but, instead, one created for the purpose of destabilizing those norms that are stable? If this is the case, then we would need to confirm through a dialectic method whether deconstructivism is viable or not. In literary theory, deconstructivism operates by encouraging us to read literature closely but with skepticism, questioning binary oppositions, resisting final interpretations and embracing ambiguity. When we put all these words together—skepticism, questioning, resisting and ambiguity—what do we get? These words encourage doubt, challenge authority and embrace uncertainty, which could be summed up in one word: instability. The question then becomes: does deconstructivism identify instability or produce it?

Considering this question, I think we must, first, understand deconstructivism for what it is. I am not advocating that it produces instability, but I am saying that there does exist a possibility that it does. Therefore, we cannot assume that it does, nor can we assume that it does not. It is important to understand that any disagreement with its principles—its skepticism of fixed meanings, its rejection of absolute truth and its tendency towards destabilizing established frameworks—if not offered critically and constructively, will engage it in the very manner being criticized and will result in confusion or ambiguity, which is exactly what deconstructivism wants and, in many ways, needs. In this series, I have tried to provide a picture of this theoretical position from different angles for the purpose of understanding. Ignorance is offering criticism of that which we do not understand; analysis is offering constructive critical analysis in a thoughtful, respectful and knowledgeable manner. Back to our question: how do we respond to deconstructivism?

Let’s begin by seeking to understand what we believe and subjecting our own beliefs to the same analysis to confirm whether they are true or not. So many of us are unwilling to do that, but we must be willing to if we are seeking truth. We must, next, understand that our perceptions, as right and as true as they feel, are only our perceptions. They are not reality, and at times they are not even true. Sometimes they are true and other times they are not. Most of the time they are built and reinforced by someone else’s perceptions, which should be analyzed as well. For example, I have been advocating in this series, in subtle ways, that one of the weaknesses of deconstructivism is its lack of focus on the pragmatic reality of communication. To communicate, we need “shared linguistic and cultural frameworks,” and my example of that is language. English speakers do not communicate well in other parts of the world if they are monolingual or unwilling to engage in the language of the region in some way. If they expect everyone to speak English and hold a very superficial view of communication, then they will struggle to communicate because they have allowed instability to reign and have taken no action to clarify. There are other aspects of communication, like culture, attitude, countenance and a willingness to engage. If none of these are engaged, communication will be lacking and remain ambiguous and confused. That sounds nothing like the state needed to communicate effectively, and yet that is a practical, albeit simple, example of deconstructivism at its simplest level.

As we engage deconstructivism, and you will engage it, it will be helpful to be able to recognize it. How will you do that? Let’s start with its tendency to blur all distinctions. Not only will it seek to destabilize stable norms, but it will blur clear distinctions, which tends to lead to relativism, another sign of the presence of deconstructivism. Where do we see this? Right now, the most prominent place we are seeing this is in the blurring of the genders, male and female. This is a clear indication of the presence and the impact of deconstructivism, but it is also an opportunity to address deconstructivism’s weakness when it comes to practicality and real-world applications. While there is a blurring of the genders (per deconstructivism), there is not a blurring of the product of this blurring, which is contradictory and an opportunity to determine its validity that we should not miss. Again, our response depends on our ability to identify the presence and impact of deconstructivism and then respond respectfully and lovingly to it inside its own theoretical methodology. This means we must understand it, something most of us are unwilling to do. It is helpful and intelligent to read and study both sides of an issue. As difficult as this is to do, to really understand and respond well, we must do it. Another tendency of deconstructivism is its push towards ambiguity, which is not workable in several vocational settings, especially in areas like medicine and engineering. We should not blindly and emotionally reject deconstructivism outright because of these two examples but use them, applying them back on top of deconstructivism as a means of pointing out weaknesses, gaps and breakdowns and of asking questions.

Deconstructivism is a critical theory that is used effectively in academic micro-situations, but its struggles, like those of most academic theory, begin when it is applied to culture in real-world macro-situations or used to push an agenda and change behavior. Any theory, good or bad, applied in similar situations will produce similar results. We should respond as civilized, respectful human beings, with a critical eye towards its application in the wrong settings, to learn more about it and to use it to pursue truth. In the right settings, it is effective in rooting out bad theory and paving the way for good theory, but in the wrong settings, it quickly becomes a hammer akin to propaganda, used by those with malicious intent to inflict their ideas on others via power, and that is neither ethical nor critical analysis. This concludes this series on deconstructivism. I hope you enjoyed it. Until next time …

Derrida, Jacques. (1988). “Derrida and différance.” (David Wood & Robert Bernasconi, Trans.). Evanston, IL: Northwestern University Press. (Original work published 1982).

Epistemology: Knowledge, Understanding or Both

Have you ever said, “I do not understand”? I am sure you have, but have you ever thought about what it means to understand? It seems so basic a concept that everyone should understand what it means to understand, but do we? Do we understand in the same way we used to understand? Is understanding someone the same as understanding something? This post explores understanding through the lens of philosophy.

It is fascinating to read that this concept of understanding, in philosophy, has been “sometimes prominent, sometimes neglected and sometimes viewed with suspicion,” as referenced in the Stanford Encyclopedia of Philosophy (SEP), which was my main resource for this post (Grimm, 2024). As it turns out, understanding, or episteme as it is known in philosophical circles, has been treated differently depending on the time frame. Who knew?

Let me start with the word “epistemology,” which was formed from the Greek word episteme, which, for centuries, was translated as knowledge; in the last several decades, however, “a case has been made that ‘understanding’ is the better translation” (Grimm, 2024). This is due, in part, to a change in the semantics of the word “knowledge.” That change was prompted by a shift towards observation as the primary means of obtaining knowledge, which is not so much a change in understanding as it is a change in the semantics of knowledge. But should that change how we define understanding?

The SEP references theorist Julia Annas, who notes that “episteme [is] a systematic understanding of things” as opposed to merely being in possession of various bits of truth. We can know (knowledge) what molecular biology is, but that does not mean that we understand molecular biology. There is a clear difference between knowing something and understanding something, or at least there used to be. Both Plato and Aristotle, according to the SEP, considered episteme an “exceptionally high-grade epistemic accomplishment.” They both viewed episteme as both knowing and understanding. The Greeks and most of the Ancients valued this dual idea of understanding, and yet, according to the SEP, subtle changes in the word’s semantics took place over time, moving episteme from knowing and understanding to just knowing, which, in my opinion, allowed observation a more prominent role regarding understanding. The question is, did observation improve our understanding of understanding?

There are many theories on why this shift in the semantics of understanding occurred, but it did occur. My concerns do not center on the “why”, but instead, they center on the impact of this shift on present understanding. The idea of understanding went through a period in the past where its overall importance diminished and was replaced by the idea of theorizing, which is not understanding but speculation. According to the SEP, theorists throughout history have proposed various theories about understanding, and most theories did two things: they pulled us away from the original idea of understanding and pushed us towards a focus on self. It was self that was understanding’s biggest threat in the past and it is self that continues to be its biggest threat presently.

When I read that understanding was neglected in the past, I struggled to make sense of why. Who would not want to understand? It was only when I understood that, at the time, understanding was thought to be primarily subjective and psychological, with a focus on an understanding rooted in familiarity, that it made more sense to me. Familiarity is the idea of being closely acquainted with something or someone. Familiarity pushed understanding towards self and away from the dual idea of knowledge and understanding. This push mutated understanding into what amounts to an opinion, making it foundationally subjective, that is, until it bumped into science. In the world of science, understanding, or episteme as it is often referenced, was forced to move away from subjectivity and towards objectivity to interact with positivism, which was foundationally dominant in science until recently.

According to the SEP, the notion of a subjective understanding inside epistemology was, rightfully, downplayed in the philosophy of science due, in part, to the efforts of Carl Hempel (Grimm, 2024). Hempel and others were suspicious of this “subjective sense” of understanding and its interaction with science. According to Hempel, “the goodness of an explanation” had, at best, a weak connection to understanding, especially real understanding. Hempel’s point was that a good explanation might produce understanding, but then again it might not; either way, it would still be familiar and seem like understanding. That was not objective, and objectivity was what science needed. The work of Henk de Regt drew a distinction between the feeling of understanding and real understanding. He argued that “the feeling is neither necessary nor sufficient for genuine understanding.” His point, which seems straightforward, was that real understanding has little to do with feeling. Feeling is not scientific, nor is it objective. It is always rooted in self, which is not understanding.

Understanding is thought to be a deep knowledge of how things work and an ability to communicate that knowledge to others. This presented a question: what is real understanding? According to the SEP, there are multiple positions on this one question. It is interesting to note the presence of “luck” in these positions, with one asserting understanding as akin to full-blown luck (the fully externally lucky position). This is where I differ from the SEP and dismiss the idea of luck altogether. These positions assert, in subtle ways, understanding as a pragmatic, product-oriented method; all that seems to matter is that you understand, which, by all indications, would not be true for true understanding. True understanding is being able to explain to others, in detail, the understanding you understand. The fully externally lucky position is rather pragmatic and contrary to this idea of understanding. It seems to stop at one’s own understanding and does not consider that to truly understand, one must be able to pass on the understanding one understands to another.

The contrasting position argues that one needs to understand in the “right fashion” to understand again, and for me, the word “again” is key. In other words, understanding, to be considered understanding, always needs to be replicated in a way that can be communicated to others so that they understand, and to do that one must understand the process every time and not just one time. The first position, for me, violates the duality of understanding and knowledge. This is important because, for me, it is the duality that completes understanding. To understand a concept, one must know what the concept is and understand how it works. The first position, the fully externally lucky position, blends knowledge and understanding into something that loses the semantics of both, pushing understanding into a pragmatic space where it becomes almost tangible, discounting the process in favor of the product. This is not understanding but a lower form of knowledge. True understanding is always a process that explains how the product came to be, how the product works and how the product is applied.

There are those who argue that understanding does tolerate “certain kinds of luck.” These philosophers hold positions that understanding can be “partly externally lucky.” Is it me, or does luck have no place in understanding? If luck has any place in understanding, then that understanding is not understanding but a stumbled-upon form of knowledge. No one stumbles onto a medical degree, nor onto the knowledge needed for it. Most would not consider this a proper application of their position, but understanding builds on itself, and if it does, then this application is not as much of a stretch as it would seem. I believe the idea of understanding goes beyond the discussion in this post. It is an esteemed element of our humanity. It is who we are as human beings, and a large part of what makes us human.

There are those—and the number grows daily—who no longer value understanding or want to spend energy on it. They consider it an antiquated process, no longer needed because we have technology; specifically, we have AI to do all our understanding for us, right? But do we? Does AI help us understand, or does it only provide explanations? Are explanations understanding, or are they something else? I believe understanding is distinctly human. I believe it is how we interact and build community. Maybe we don’t need to understand chemistry (I think there will always be a need to understand chemistry and everything else), but we will always need to understand each other because we are all different.

If we no longer strive to understand the things that we do not know, how will we ever understand anything or anyone? Will we even want to understand in the future if we no longer seek to understand in the present? Will we become conditioned to enjoy being isolated and introverted? That seems sad and not human. This idea of understanding is much more complex than most realize. The issue is not just one of episteme but one of humanity, at least to me. Think long and hard about understanding, because once you lose it, recovering it will not be easy. Thanks for reading! Until next time …

Grimm, Stephen, “Understanding”, The Stanford Encyclopedia of Philosophy (Winter 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/win2024/entries/understanding/>.

How Do We Know What Is Real?

I took a trip back to where I was raised to visit family and friends. It was a wonderful trip, but quick and too short; that is sometimes life. It was good for my soul and even better for my mind. I loved all the conversations I had. I loved listening to how others arrived at their own points of view. Some of us still hold the same values and have adapted to life in some of the same ways. Others hold different values and have adapted to life in different ways. Why? One of the subjects that came up was reality and how many different versions of reality are out there now. As I was driving back home, a question came into my mind—how do we know what is real?—and I could not shake it.

My standard practice when I get one of these questions is to go poking around among the people I respect, read or follow and see what they think. In my latest search, I stumbled upon a reference to an article with an interesting title, so I looked it up and read it. The article was in Psychology Today, which, for me, is not one of my usual references, but the title was too inviting. The article, “How Do We Know What Is Real?” by Ralph Lewis, M.D., was well worth my time and maybe worth yours too. Before I get into the article, let me set some foundational timbers for this post.

First, let’s be clear: we experience the world through our five senses; that is a given. Second, it is best to experience the world with all five of our senses. Most agree on that point as well. It is the way most of us live, and we give it little thought. We just do it. Point three: most theorists would call this experience subjective and question its reliability, but Lewis points out that “subjective perception” is still a crucial source of data for almost everyone. We rely on it every day as we live our lives. Consider science: even its practices and methods incorporate the senses, i.e., observation, which is technically subjective and yet still a foundational part of the scientific method. Dr. Lewis writes, “Science is just a method to minimize the distorting effects of our perceptions and intuitions and to approximate a more objective view of reality.” That brings me to intuition, which is, and should be, greatly valued. You use it and so do I. It is the primary focus of this post. Most professionals use it. They depend on their own “trained” intuition to do their job. Doctors, financial advisors, plumbers, teachers, engineers and many others all use trained intuition to excel in their vocations.

But here is the issue I want to focus on: trained intuition is not universal absolute truth, nor is it reality. It is a form of discernment that allows us to problem-solve. It is assumption and inference, developed through our education and training, that works with who we are to solve issues. It is also based on our ideology, which is a composite of our beliefs and values. This makes it uniquely ours, and it tends to work only for us. But this means that we often see our intuition as reality. In some respects it is, but it is not ultimate reality for us. The more success we experience, the more egocentric we become, and this puts us in a position to think our reality is everyone’s reality. It never is. Your doctor may have an intuition about why you are sick, but that is the result of his or her interaction with you and your issue. At best, it is a temporary, situational reality that works for your current situation, but that is as far as it can go. As Lewis states, “But it [intuition] can be completely off base” and lead even experts astray. Lewis continues, “We have to be aware that our intuitions and firmly held assumptions may be completely wrong.” This leads me to a question: where does intuition lie? The answer is the brain.

The brain is a “well-honed but imperfect virtual reality machine,” according to Lewis. We don’t have a brain; we are a brain. Our brains produce subjective perceptions which are representations of our external world—our very own form of virtual reality. According to Lewis, we can be confident that most of the time these subjective perceptions that our brains produce are faithful representations of our actual external world. Social cues are just one example of our brains making a subjective perception. In most instances, we are right, but I think we have all experienced a time or two when we were wrong. 

Our brains, according to Lewis, rely on patterns, approximations, assumptions and best guesses. Our brains often take shortcuts, fill gaps and make predictions, and all of these are based upon our intuition, which flows from those subjective perceptions. Lewis is clear: subjective perceptions are real, but they are not what they seem, even to those of us who own them. The brain is a “confederation of independent modules,” all working together. Lewis writes, “The vastly complex unconscious neuronal determinants that give rise to our choices and actions are unknowable to us.”

The brain just works, and it works well because of the subjectivity of our experiences, but, as real as those experiences seem, they are not reality for us, and they cannot be reality for us. The more successful we are, the more our tendency will be to think that our reality is everyone else’s reality, which, again, is when we get in trouble. When we push our intuition as if it were reality, we will come to think it is reality. When this happens, we merge our intuition with our existing ideology, and they become one. We will always find others who share and reinforce our ideology, and then it is our ideology that becomes our reality. This tends to isolate us inside our ideology, which becomes our ultimate reality. This is the Land of Oz and not reality at all. This is where real issues arise, in the form of narcissism and nihilism.

Lewis goes into mystical experiences and hallucinatory or dissociative experiences to make his point. He posits that these experiences seem so real to those who have them that they believe they have discovered a transcendental reality. They have not discovered an alternative reality. They have merely experienced the power of a chemical or drug or the power of suggestion. The brain thrives because of subjectivity, but that subjectivity makes it vulnerable to external influences like drugs and persuasion. We would be naive to assume that our subjective perception of the world was anything but that, and yet this is where many are today. There is no longer a concern about doing the right thing, working hard, having integrity, honor or even telling the truth. The only concern right now is for self … to be right. We are in a war of opinions, and everyone is armed with their own editorial comments. The battles rage because the winners get to declare what is true, until the next battle comes, and then the cycle starts all over again. This is our world today, and determining what is real is no longer determining what is true. Our elections have revealed that, have they not? How do we know what is real? I think the better question might be, do we care about what is real? Until we do, we will never determine what is real.

The Rise and Fall of Western Civilization: Part II

Part II: Western Civilization and Christianity

We, in the West, love our freedom, our liberty and all the choices afforded to us. We love free speech, the right to an education and class mobility. We vote, are free to be critical and free to believe different things, if we choose. All of these “rights” are ours, at least we believe that they are ours. There is just one minor problem: these rights we claim as ours are found only in the West and nowhere else. They are not really ours but on loan to us from the West, which raises the question: why, then, is the West in decline?

I ended my last post suggesting a connection between Christianity and the West, and in this post, I am going to defend that suggestion. Whether you believe in its truths or not, and you are free to do either in the West, there is no denying the impact Christianity has made, not only on the West, but on the entire world. Many of the beliefs, the values and the traditions we hold dear came to us in the West and from the West, and most of those came to us, like it or not, from Christianity.

When Christianity entered the world, it came into a world that was a mixture of Roman, Greek and Jewish. There were three major civilizations in the world at the time, Rome, Athens and Jerusalem, all trying to conquer each other, but it was Christianity, according to Gregg, that did the conquering. It made the Jewish God of Abraham available to both the Romans and the Greeks while also appropriating and transforming much of the Jewish thinking into a synthesis of reason and revelation. It was Christianity that changed the world by granting rights to those who had never had rights and introducing change that applied to all people. These ideas morphed into what is known as the West and “Western” thought today. According to Gregg, all of it came out of Judaism through Christianity. It was Christianity that introduced three major ideas that were new and radical; it was these three ideas that contributed to the development of this distinct “Western” culture and “Western” mindset.

These three ideas were distinct to Christianity and, as we shall see, versions of them were foundational to Western Civilization. First, reason was viewed as divine, which suggested that the world was created by a Holy God and had order and purpose. Second, there was the idea that all human beings had reason and could employ it with assistance in redeemable ways to know truth, including the moral truth of a Holy God. And third, this Christian revolution started by Jesus Christ emphasized a new form of freedom that the world had never seen before. It was a freedom that unfettered all human beings from rulers and their power and provided them a means to a Holy God and to their own betterment. These three ideas changed the entire world and forms of them took root and became foundational to the West as we know it today. 

Those three Christian ideas that changed the world have morphed into three tenets of Western Civilization that we assume to be our own natural rights. They have become a bit distorted over time, but they are still very much alive and active today in the West. We assume they are distinct to the West and products of the West when their genesis is rooted in Christianity. We don’t think about them. They are ours, and we assume that they will always be ours because we possess them and have always possessed them. They are part of our normal, our worldview and our paideia, if you will. These three ideas are distinct to Western Civilization, and yet they are even more distinct to Christianity, although they are better known by their Western nomenclature. What are they? Well, you will recognize them because they are you and me. These three ideas are three rights we take for granted and call our own: the right to an education, the right to a democratic way of life and the right to personal freedom. Each one came to us, not from the West, but from Christianity, along with a host of other “norms” now residing in the West.

Those three ideas created a revolution of sorts that changed the entire world. They gave everyone power that had once been reserved for kings and queens. They put totalitarian regimes and those like them on notice, offering something else, a better form of government, and as much as we might want to, we cannot ignore their connection to Christianity. It was Christianity that was affirmed as “the true philosophy” by Clement of Alexandria and lauded for its “integration of faith and reason.” It was Christianity that produced churches, hospitals and schools, including the university, which was founded for the training of the church’s clergy and for the pursuit of truth for the sake of truth. This one product (the university) was a statement on the change that Christianity brought to the world. You can find the university in almost every country in the world today, and in its vision you will find a pursuit of wisdom and knowledge. This was an educational pursuit the world had never had the liberty, the ability or the desire to undertake until this Christian revolution, and all of it was rooted in a belief and in a conviction that there is a Creator God who created a world of order that could be known. Today, our colleges and universities have all but forgotten this connection, but they owe their very existence to Christianity.

There is a wonderful book entitled The Dying of the Light that traces the origins of colleges and universities in the United States. The striking point about this book is that almost every college or university in this country was originally the product of a denomination … the product of some form of Christianity. The Congregationalists, the Baptists, the Presbyterians, the Lutherans, and the Catholics … every denomination that created a college or university is in that book, and almost every college or university created in this country is in that book. The point not to miss is that these institutions of higher learning were created, in part, due to a mandate from a Holy God and a conviction that this God created a world of order that could be known and should be known, which prompted a curiosity and a desire to learn more about this God and the world he created. One author put it this way: “It [Christianity] launched an age that saw the world as characterized by order, that the human mind can comprehend and a world that merits study simply because it is the world of God.” So, when we talk about the West, in most instances we are talking about Christianity and its impact on the world.

It is the West that ushered in the study of science, mathematics and medicine. It is the West that employed democracy in real time and presented it as a better, more expansive option. It is the West that concerned itself with poverty, slavery and racism, albeit imperfectly. No other country, people group, religion or mindset has offered anything close to what the West has offered to the world. It is the West that has taken its advances and advanced itself, for better or worse, and while it has had its share of issues, indulgences and mistakes, it has still provided the world with so much. This is Western Civilization and the Western mindset, all rolled up into an innocuous phrase we use without a second thought, and today we find it close to death. Why? In my next post, I begin to examine its fall and death. Until then …

The Rise and Fall of Western Civilization

Part I: The Development of the West

I recently read an article about the decline and fall of the West, which produced two questions in my mind in response … are we living through what many are calling the decline of the West, or has the West already fallen? These questions produced more thoughts and prompted me to do a little reading on the subject. In several of the articles I read, one book was referenced more than all the others: A Study of History by Arnold Toynbee. It turns out that this is not just any book but, by most accounts, a masterpiece when it comes to Western Civilization. Let me explain why.

Arnold Toynbee suggested in his book that the West was already in sharp decline. Why did he do this? A Study of History is a multi-volume study of civilization, in which Toynbee examined twenty-one different civilizations across the span of human existence and concluded that nineteen of those twenty-one collapsed when they reached the current moral state of the United States. But here was the shocking part for me: he first published A Study of History in 1931, and in 1931 he posited that the West was in sharp decline and was, according to him, “rotting from within.” Toynbee died in 1975, but I wonder what he would think of our culture today. Are we living in a culture rotting from within, or is it already dead?

With this post, I begin a series on the West with the goal of answering the question: is the West in decline, or has it already fallen? There are several other excellent books devoted to this topic. Oswald Spengler wrote The Decline of the West, Christopher Dawson wrote Religion and the Rise of Western Culture and Tom Holland wrote Dominion, and each author grappled with the same question regarding the decline of Western culture. Is Western culture dead, or is it in decline? Let’s find out together. First, let’s explore how the West came to be.

I begin with Samuel Gregg and his book, Reason, Faith, and the Struggle for Western Civilization, which is also excellent when it comes to our topic. In his book, he offers his account of the West, which is like the others but nuanced with some differences. Gregg argues that Western Civilization was conceived in a marriage of Jerusalem and Athens. His answer is like many others, and yet he posits that Western Civilization was born through a marriage of “faith and philosophy” in a version of Christianity born in the West that embraced and applied both faith and reason as one. He sees this “one” coming out of ancient Judaism, which he suggests was a synthesis of both faith and reason as applied to the living of life in a new way. Life was no longer about survival, at least not in the West; there were advancements that made life better and allowed progress in thought and religion. Gregg states that Judaism “de-divinized nature” and was the first worldview/religion to completely reject the ancient idea that kings and rulers were divine and everyone else was to be under them. Judaism, unlike all other religions around it, offered the world a new king. Its rejection of the old idea came through a new view of the cosmos that was spiritually oriented. Judaism saw the cosmos as part of the created order of a universe created by a Holy God, and because the universe was created by this Holy God, it had order and intelligence and was not the formless chaos all others saw it to be.

There was good, in time and space, and there was hope; all was not lost, according to Judaism, which was a much different narrative of the world than most other historical and religious narratives of the time. What Gregg was proposing was that in Judaism the Jews found a liberation of sorts, a liberation of the cognitive from time and space. Judaism affirmed that there was a good God in heaven who was a Holy Creator God and that human beings were part of his created order, not merely interchangeable parts of a larger machine. Human beings were seen as created in the image of this Creator God; they had purpose and were given responsibilities to live as moral beings in this created order. This was a radically different idea from all the ideas before it, and it is what first makes Western Civilization unique. This was a vastly different worldview; it would become distinctly Western and a foundational mark of Western Civilization.

The impact of the merging of Athens and Jerusalem on Western Civilization and the Western mind cannot be overstated, especially regarding our current modern Western mindset in the United States. It is the United States that has been the pseudo-capital of Western Civilization for many years now, and it has been the United States that has served as the poster child of the West. The United States has impacted the West, including the Western mindset, more than most. And now it is this mindset that has become compromised, as referenced in part by Allan Bloom in his book, The Closing of the American Mind. It is the American mindset, once so free and so creative, that now seems more vulnerable and more impacted than all others by the attacks against it. Bloom, in his book, attacks the moral relativism that he claimed was by then in control of the colleges and universities. The very freedom brought to us by the West was the very thing being transformed before our eyes. Again, Bloom published his book in 1987, but he appeared to be saying some of the same things. The West, often seen through its colleges and universities, was in decline and dying back in 1987, according to Bloom.

Back to Gregg: he notes that Athens brought both contributions and obstacles to human thinking. It was Athens that was known for its skepticism, its irrationalities and its philosophies; most of them stood in stark contrast to the distinct and different worldview of Jerusalem (Judaism). So, how did they merge when all indications are that they should have clashed? The merging of Judaism and Greek thought, according to Gregg, predates Christianity, which is marked by the birth, death and resurrection of Jesus Christ. There can be no denying the impact of Jesus Christ on the world, regardless of your belief about him. Prior to Jesus Christ, educated Jews were more than familiar with Greek thought and moved easily back and forth between Hellenistic and Jewish thinking. This was for purely pragmatic reasons, as the Romans controlled the world and therefore controlled thinking. The Romans were borrowers and refiners. They invented little of their own, but they borrowed from those they conquered and bettered what they borrowed. The Romans allowed those they conquered to keep certain elements of their own culture if they accepted the elements of Roman culture considered important. It was the Jews who were different from all other cultures; it was the Jews who had this One God and refused to bow down to any other god. Both the Romans and the Greeks viewed the Jews as barbarians. Why? Ironically, it had little to do with their religion and more to do with their thinking and their disposition. The simple answer is that they were not Roman or Greek; the better answer would be to say that they were not Western prior to Christianity. So, there it is … a connection between Christianity and Western Civilization. In my next post, I will explore this connection, but until then …

Critical Theory: Part VI

Critical Theory as the Norm

We have now come full circle to the point where the theory is to be normalized. As Horkheimer and others developed this theory, the initial intentions, I believe, were rooted in standardizing it in ways that positioned it to become “normalized” in culture. To do this, positivism and the interaction with it had to be addressed; positivism was, for all intents and purposes, foundational to almost everything … science, philosophy and even worldview. Horkheimer, in his essay, intentionally presented Critical Theory as if it had positivist intentions; he wrote, “In so far as this traditional conception of theory shows a tendency, it is towards a purely mathematical system of symbols. As elements of the theory, as components of the propositions and conclusions, there are ever fewer names of experimental objects and ever more numerous mathematical symbols.” While it appeared in this statement that Horkheimer embraced positivism, we learn later in his essay that he did not embrace positivism as it was, but as it needed to be for Critical Theory to assume its dominant position in culture. He saw the positivism he encountered in much the same light as capitalism: as that which was “dominated by industrial production techniques,” or by the bourgeoisie, and as that which needed to change.

To combat the positivist dominance he encountered, Horkheimer, as I highlighted in an earlier post, destabilized traditional theory, which was foundational to positivism, allowing Critical Theory the space it needed to surpass positivism. To do this, he believed that Critical Theory must be capable of two things: it must push traditional theory to view culture within a historical context (I discussed earlier at length why this was important), and its critique must incorporate all the social sciences. Horkheimer explained that a theory can only be considered “a true critical theory if it is explanatory, practical and normative,” but to do this required the presence of all the social sciences in its foundation and its practice. His theory must explain social issues through practical means, in responses that stay inside the parameters of the field addressed, much like traditional theory, but it must also speak to and address all of culture in order to change it. This was “critical” theory, and Horkheimer and others created it to be much different from traditional theory.

By offering a “critical” theory rooted in all the social sciences, one that addressed a field while speaking to and addressing all of culture, Horkheimer presented a better, improved theoretical option, but for whom? His “critical” theory was constructed to present Marxism as the norm and to position it to assume the dominant positions of culture. Through his “critical” theory, he deconstructed “traditional” theory and its production for one reason: his perception was that traditional theory failed to address power and the status quo through the social sciences. Horkheimer presented Critical Theory as a theory that not only addressed power and the status quo but would use the former to deconstruct and the latter to fundamentally change the foundation of traditional theory, creating the means for Critical Theory to engage and transform culture.

Traditional theory had long been confined to the field it served, and it worked best inside the parameters of that specific field because its goal was confirming true propositions within specific fields. Critical Theory, while technically not part of science, was built to interact aggressively with all fields, including science, for greater purposes. Its interests extended beyond specific fields and into culture itself, positioning itself as dominant over all fields for the purpose of changing culture and its norms. Critical Theory was to interact with all of culture through power structures in which it assumed the dominant position. Its goals were not confined to one experiment or one field; instead, they were much larger and broader, purposeful and directed at cultural transformation.

Traditional theory had always focused on coherency and on the distinction between theory and praxis within intimate settings. It followed the Cogito in its view of knowledge, embracing the idea that knowledge was grounded in self-evident propositions, which Horkheimer used to introduce the idea of individual genius into the concept of traditional theory. Traditional theory typically explained facts through the application of universal truths or laws, by subsumption, which either confirmed or denied the truth proposition proposed. Horkheimer, as I discussed, posited that the “universal” part of the theoretical equation was rooted in the individual and not in the process, which rooted traditional theory in time and space, leaving it exposed. To confirm truth, traditional theory willingly partnered with positivism, rooting itself in an objective process, which had historically been considered the better option for confirming scientific investigative truth. Traditional theory would defend a scientific truth through empirical confirmation, which embraced the idea of an objective world where knowledge, confirmed through empirical means, was thought to be a mirror of reality. This view was not only rejected by Critical Theory but overrun and changed by it.

Horkheimer and, for the most part, all of the Frankfurt School rejected the notion of the objectivity of knowledge due to its historical and social foundation, a foundation which, ironically, came courtesy of Horkheimer’s hand and was used later to normalize Critical Theory. Horkheimer wrote, “The facts which our senses present to us are socially preformed in two ways: through the historical character of the object perceived and through the historical character of the perceiving organ.” In other words, Horkheimer, with this statement, was confirming that it was the individual genius of the observer that made traditional theory work, which positioned it in time and space, allowing it to be overrun by the “better” and more dominant Critical Theory. Traditional theory, with its roots in an objective view of knowledge, was now susceptible to Critical Theory because of its objectivity, which was now grounded in the time and space of the individual and not in the dialectic process of theory. This made traditional theory historical, which exposed its past dominance, making it as vulnerable to the criticisms of Marxism as everything else. Critical Theory, for Horkheimer, was that which would solve the issue of the “partiality” of the “culturally impacted” observer and of the past dominance of the oppressors; for Horkheimer, it was Critical Theory that would free individuals from what he saw as their entanglement in a socially embedded perspective of interdependent oppression.

Traditional theory, historically, had been evaluated through practical implications, with no real consequences of significance outside its field; knowledge, as a mirror of reality, was more a “theoretically oriented” tool than anything else, which cast knowledge as a product, and an objective one. Critical Theory was presented as “the” theory, devoid of any bias towards objective knowledge; it presented itself as that which considered knowledge through functional relationships to ideologies and societal liberties. Considering this perception, knowledge becomes what Critical Theory needs it to be … societal critique, cultural action, something subjective, directly impacted by the dominant and, ultimately, a means to transform reality. This is where we find ourselves today, living in a reality that is being transformed before our eyes. So, how do we recognize Critical Theory as we live each day?

Six Ways to Recognize Critical Theory:

Here are six ways to recognize Critical Theory in everyday life. First, Critical Theory views language as a social activity and as a vehicle of ideology, so the adage “sticks and stones may break my bones, but words will never hurt me” is wrong to Critical Theorists. Words will hurt you and are considered harmful by advocates of Critical Theory; therefore, they need to be attacked and treated as acts, and those inside Critical Theory will treat them accordingly. Words will be attacked and treated as criminal acts. Second, there will be no rules associated with anything rooted in Critical Theory, for one reason: it is Critical Theory that is the ultimate authority, and it makes all the rules. If norms are being assessed and critiqued through Critical Theory, almost anything goes. Stealing, vandalism, rioting and fighting are all justified if they are manifestations of the oppressed who are “rightfully” pushing back against their oppressors. Who are the oppressed? Well, they are anyone who has not been in a position of power in the past, regardless of their personal efforts. Third, those critiquing and assessing generally have authority even if they have no experience in the area they are critiquing and assessing. Their authority comes from believing in and rooting themselves in Critical Theory, which is ultimately “the” authority over all fields and all theories. For example, a political philosopher critiquing a medical procedure with no formal medical training will have more authority than the medical professional due to the authority of Critical Theory. We can expect to see more of this if Critical Theory continues to rule.

Continuing, fourth, there will generally be hypocrisy associated with any movement made under the authority of Critical Theory. For example, those condemning wealthy company CEOs and their high salaries, CEOs who produce a product and provide viable employment to many, will, with the same breath, embrace professional athletes and celebrities who are, in most cases, much wealthier than company CEOs but produce no product and contribute little to society other than entertainment. This phenomenon is a fascinating study waiting for someone to take the time to address it. Fifth, any movement in Critical Theory will trump tried-and-true established theories and truths in science, medicine or philosophy. It will be Critical Theory that pushes the agenda and the change in the field, not expertise or experience. We see this taking place in government, law and even medicine. And, finally, Critical Theory sees everything as embedded power structures existing in a binary world of oppressed and oppressors. Everyone is either looking to oppress or being oppressed. Everyone is either biased in some way or the victim of some sort of bias directed against them. Dominant norms that are good for society will be condemned, not on their merit or quality, but because they have existed in a dominant position for too long. Overall, we must remember that Critical Theory has Marxism as its foundation, and it will always have the Marxist tendencies which identify it. Each of the examples I have presented has one thing in common: all of them are Marxist in nature. 

This is the world in which we live, and it is a Critical Theory world … for now! One thing I know: all things eventually come to an end. The Babylonians, the Roman Empire, the Greeks … all of them came to an end at some point, and so too will Critical Theory. When that day comes, and the history of this movement is written, what will it say?  

This concludes my series on Critical Theory. I could spend the next year on this one topic, but all things must come to an end; it is time to move on to something new. Thanks for reading and remember, thinking matters!     

Critical Theory: Part V

The Deconstruction and Development of a Theory That Is Critical  

I have now arrived at the point where I will pull back the layers of development regarding Critical Theory. Theory, before Critical Theory, had, as part of its composite, elements that were oriented towards analysis with a distinct dialectic tendency. With this dialectic tendency, theory was considered a well-substantiated explanation of an aspect of the natural world. It required fluidity with an analytical orientation, which allowed theorists, especially those in science, to make predictions based upon its being testable under controlled conditions in experiments. In the case of philosophy, theories were evaluated through principles of abductive reasoning and pushed to withstand scrutiny; they were used to test a thesis through the development of an antithesis, which was thought to confirm whether the original thesis was true or false. This element of theory was, for Horkheimer, problematic, as the dialectic was not only rooted in science but also a beneficiary of positivist protection. It was perceived as a process that revealed true scientific tendencies through objective means (positivism). Horkheimer recognized that theory, left untouched, did not have roots or tendencies towards Marxism, nor would it ever have any of those tendencies unless its foundational structure changed. 

Horkheimer began the deconstruction of general theory by making a connection between theory and society, which pulled a distinctly social element into the perception of theory. He established this connection through what he called the savant, or the specialist. This was an important step in the deconstruction of theory; he wrote regarding the specialist, “Particular traits in the theoretical activity of the specialist are here elevated to the rank of universal categories of instances of the world-mind, the eternal Logos.” His point was to establish that it was the individual (a member of society) that was the universal when it came to the theoretical (theory) because, according to Horkheimer, the universal was not theoretical or dialectic; it was, instead, individual genius. It was this push towards individual genius that also established theory as historical. He explained that the decisive elements of theory were nothing more than activities of society, “reduced” to the theoretical through the activities of individuals in society; in the case of theory, the activities of the specialist or of individual genius were themselves activities of society. This tied theory to the social through the individual (either as a specialist or through individual genius), and it was the individual that rooted theory in the historical through the time and space the individual occupied.  

In his union of theory and the individual, Horkheimer created a bridge from the theoretical to the social via the individual through individual genius, but it was the specialist whose activity he labeled “the power of creative origination.” This activity, to a Marxist, was production, which Horkheimer labeled “creative sovereignty of thought,” reinforcing that even the individual’s thoughts were social and historical. This effectively removed scientific theory from its privileged and protected positivist (objective truth) position and reduced it to a social action. This was a line in the sand for Horkheimer … a risk he was willing to take. The risk—attacking the legitimacy of all other theories grounded in the scientific through his new “critical theory”—was well worth it for him. Coming out of World War II and the oppressive reign of the Nazi war machine, he believed people were open to the radical change he proposed, especially if it “appeared” to bring back the civil liberties and the freedoms they had lost. 

For Critical Theory to live beyond its inception, it would need the idea of theory (the theoretical) to be re-cast as a different perception with a different semantical interpretation, one that embraced Critical Theory without requiring Critical Theory to embrace old ideas of theory, change for them or be compromised by them as applied by science. This new theory Horkheimer proposed had to exist as dominant while bringing change to the theories it encountered in ways that pushed them towards tendencies that were critical and Marxist, and the only way to do this was for it (Critical Theory) to be authoritative. When encountering all other theories, it had to be “the” critical theory in each interaction. From my perspective, I do not believe this could have happened at any other point in history; after World War II and the Nazi regime’s widespread oppression, Horkheimer saw an opportunity and took it.  

Horkheimer, to usher in this change, pushed the theoretical to the point of instability, which produced doubt, setting it up to be re-established as “the” critical theory that would remove the doubt now attached to it. Theory, for Horkheimer, was now where it needed to be; it was no longer theoretical in any protected sense but instead a true means of production. Its perception was now more a social function or an individual decision than anything theoretical, which made it part of production, which he labeled a “production of unity,” reducing production to that of a product. Horkheimer never saw production as something that produces a product; he saw production only as a product of culture, manifesting in the same ways as other cultural products. For him, it had to be a means of production that could be controlled by Marxism. If production was no longer a process of “becoming,” then it would be open to “becoming” something new, something with Marxist tendencies, especially if it was firmly entrenched in the social and the historic. As a product that was social and historic, it would now be oriented towards individual tendencies (the savant or the specialist), opening it up to cultural changes and semantical shifts with distinctly Marxist orientations. 

As a product, the process of production was now categorical, easily manipulated and positioned to be re-formed in a different light. Horkheimer’s attack came full circle when he wrote, “In reality, the scientific calling is only one, non-independent element in the work of historical activity of man, but in such a philosophy the former replaces the latter.” Linking theory to history allowed it to be supple in much the same way history was, which positioned theory to be pliable … more open to revisions, changes and the influence of propaganda, and more easily impacted by the orientations of the scholars and theorists addressing it, in much the same way history was addressed. The pliability now attached to scientific theory was no longer fluid in any natural sense but mechanical in every aspect of its movement. Its movements were intentional, which allowed it to be manipulated through the power functions of those overseeing it. It would become dependent on individuals and their interpretations, orientations and contexts, and it would no longer be dialectic. This created space for Critical Theory to move into and take over the theoretical through the individual.  

As I read Horkheimer’s essay, his attack on the theoretical was on full display; he saw all dominant theories and philosophies, as well as those objects we perceive as natural—cities, towns, fields, and woods—as bearing the marks of man and shaped by man’s oppression. They were, to him, products of society, the means of production, and in a perfect Marxist world they would be equally distributed to all and not left to the bourgeoisie to manage and control. He was clearly now viewing theory, through a distinctly Marxist lens, as social and historical. Theory was part of society and tainted in all the same ways; Horkheimer wrote regarding society, “The existence of society has either been founded directly on oppression or been the blind outcome of conflicting forces, but in any event not the result of conscious spontaneity on the part of free individuals.” This one statement about society was also to be applied to theory, prior to his deconstruction of it. He saw society as that which was built intentionally, with ill intentions. He wrote regarding this thought, “As man reflectively records reality, he separates and rejoins pieces of it, and concentrates on some particulars while failing to notice others.” Those concepts of recording, separating and rejoining are conscious, intentional actions impacted by the beliefs and values of the individuals determining what is recorded, separated and rejoined. Horkheimer bemoaned the intentionality of society and saw its structure as intentionally created to give the bourgeoisie everything at the expense of everyone else, and yet he used it to deconstruct theory and recreate it as Critical Theory.  

What Horkheimer initiated so many years ago has come to fruition. Horkheimer has essentially replaced theory with a “critical” theory that is analytically and distinctly Marxist. He took the theoretical and its dialectic orientation and replaced its praxis with a Marxist one. The authoritative nature of theory, which had been assumed, especially in science, to possess an objective ability to confirm what is true, has now been taken over by a Marxist orientation with intentions oriented towards Marxist truisms. It is Marxist tendencies that are now dominant inside theory. They have been configured to analyze other, non-Marxist theories in critical ways … to cast doubt on them until they are overrun by this newly configured “critical” theory. In the end, they either submit to it or die. This is Critical Theory; it was created to be “the” critical theory of all theories and to leave Marxism in a dominant position in science and ultimately in society. This is where we find our world today … right where Horkheimer and his colleagues had hoped it would be. It is Critical Theory that drives the ideas in our colleges, pushes the bills in our government and changes the norms in our culture, and every idea, bill and norm has tendencies that are critical and distinctly Marxist. As we look at our culture and ask how we got here, there is but one answer … Critical Theory! Stay tuned for the next installment of this series. Until then, remember, thinking does matter!   

Critical Theory: Part IV

The Creation of “Critical” Theory

In my last post, I suggested that Horkheimer, to create his new “theory,” had to re-create the general idea of theory itself. I posited my own assertions, which, if I am honest, are based strictly on my reading of his essay and my own convictions formed from that reading, which is my attempt to stay inside the spirit of Critical Theory. I tried to limit my secondary sources and keep his essay front and center with little to no outside interference. Right or wrong, I am left with my own assertions; whether they are corroborated by others or even valid seems to me of little consequence considering what I have read so far in his essay. 

In a world of Critical Theory, one of the first impressions that came to me was this one: there are no rules. I need only assert my ideas in persuasive, sincere ways and that should be enough, but that is the problem. It should never be enough; it should require more because, like it or not, these are my perceptions, and those will always be based on me. It is the same with Marxists, Idealists and Pragmatists; beliefs and values always turn into perceptions. The important point is not to assert your perceptions as true but to determine whether your perceptions are true. Let’s jump right into this post with my own assertion.

My initial assertion regarding Horkheimer’s work on theory begins with this thought: to make it “critical” would require it to be fully “critical” from all angles and for all situations. The only way I see this being accomplished is if Critical Theory becomes the dominant theory over all other theories. Regardless of the validity of my claim, to answer that question I must examine, in detail, his attack on theory, or the theoretical (I reference it this way at times to distinguish it from the categorical or the practical), because, right or wrong, it reads as an attack to me. My perception is impacted by his statements; for example, he wrote, “The traditional idea of theory is based on scientific activity as carried on within the division of labor at a particular stage in the latter’s development.” Here Horkheimer seemed to direct his attack against theory by linking traditional theory directly to the action of the individual engaged in the theory, and in doing so, he presented the idea that the scientific action of a theory was no different than any “other activities of a society.” What I believe he was positing was that the individual action of employing a theory was, in substance and essence, no different than the individual action of a teacher, a coach or any other person acting according to their own convictions as a member of society; they are all social actions, which, in a subtle way, places science in a position to be overrun by Marxism. 

I do not believe his point here was to destroy theory but to keep it in a state of flux to be used for his purposes. I believe he meant to link current theory to the individual for the purpose of giving the individual power over the theoretical process, which ultimately was a connection to the means of production via the individual. Inside Marxism, the individual and their actions would always be considered a means of production. Regarding this, he wrote, “… the real social function of science is not made manifest; it speaks not of what theory means in human life, but only what it means in the isolated sphere in which for historical reasons it comes into existence.” In this one statement, he reinforced his reduction of science to that of a social function and presented it with Marxist tendencies: as a means of production. With no pressure to defend his assertion, he proceeded to “deconstruct” the “positivist protection” that science enjoyed, which would always present it as true and never as a means of production.   

The result of this deconstruction was a reduction of the theoretical to a position where it would meet all the requirements to be a means of production, but he also did something else that was equally important. He planted into the discussion a subtle historical reference (“historical reasons”). This historical reference completed the reduction of science, from a theoretical process with positivist protection (my phrase) to an action resulting from an individual choice in time and space. This is important because, as a historical reference, the reduction of science was complete; he had moved it out of the theoretical realm and placed it firmly into the social realm, where it would be at the mercy of Marxism as a means of production.

Horkheimer next took aim at society, which he viewed from a distinctly Marxist perspective. He defined society as “the result of all the work” of all sectors of production in culture. His negative view of capitalism stemmed from his belief that it was the bourgeoisie in a capitalist state who would benefit from the labor of everyone in society, which, for him, was categorically unfair and oppressive. This categorical oppressiveness, for Horkheimer, extended even to all the ideas and thoughts of a society. For Horkheimer, the necessity of re-establishing the conception of theory in a Marxist tradition was priority one in his development of a new “critical” theory fully capable of competing against those theories already established and dominant. For it to have a chance, it needed a cultural foundation that would welcome it and allow it to grow, and to have that foundation, he would have to destroy the ideas of capitalism, which operated on concepts like supply and demand and Smith’s invisible hand, concepts that would be almost impossible to control from a Marxist position. He would borrow the radical doubt of Descartes to accomplish this task.  

Horkheimer understood that every dominant theory, once doubted, would be less dominant and more vulnerable. For Critical Theory to take hold, it had to be the critical lens used in the analysis of all other theories, and part of that critical analysis had to be doubt, which, once used in analysis by Critical Theory, was left attached to the theory analyzed. Horkheimer’s goal was that one day Critical Theory would be the dominant worldview, but the current state of science, with its theoretical roots in qualitative and quantitative methodology, would destroy Critical Theory if it were not destroyed first. For Horkheimer, this was the motivation for his attack on the concept of theory. It was also why he used the radical doubt of Descartes in his critical analysis of theory to change it and then recreate it in the image of Critical Theory.

In his essay, Horkheimer dictated how theory was to be recreated semantically and culturally to reflect Marxist beliefs, and then he labeled this theory “critical” and used it for critical analysis of all other theories. Horkheimer hoped to accomplish two tasks with his recreation of theory: first, he would eliminate the original idea of theory, which was a threat to Critical Theory, and recreate it in the image of Marxism; second, he would use this recreated and reconstructed theory as a vehicle to deliver Critical Theory in ways that would assert it as a worldview and as dominant. As we look out at our world, what do we see today? We see those dominant theories of the past willfully submitting to the whims and desires of Critical Theory. 

It is Critical Theory that has become the lens of critical analysis, leading the charge to cancel dominant theories of the past and opening the cultural door for new theories to come rushing in, theories that connect with no other theories. They make no sense when it comes to science or even medicine, and yet they take hold, are defended and profoundly impact culture. We need only look back and ask a few questions. When it comes to Critical Theory, where is dialectic thought? What about the antithesis? We see the theories of the past cast into the darkness of doubt by the shadow of Critical Theory, and they either align with Critical Theory, are in some cases replaced by it, or fade away and die.  

This is our world today. It is a world where Critical Theory has become more dominant than we even realize, and that is by design. My next post will explore how theory became “critical” theory, with the hope of educating all of us in ways that will help us identify Critical Theory and its impact. Until then …  

Critical Theory: Part III

The Crack in the Door 

When examining Critical Theory through the eyes of Max Horkheimer, we can see a bit of what makes it unique and different. Let me begin with a quote from Horkheimer; early in his essay, he wrote, “There is always on one hand, the conceptually formational knowledge and on the other hand, the facts to be subsumed under it. Such a subsumption or establishing a relation between the simple perception or verification of a fact and the conceptual structure of our knowing is called its theoretical explanation.” Here Horkheimer began to subtly push a shift in our knowing, moving it into a realm almost of ascendancy, where facts are now subsumed under our knowing, which is a distinctly Marxist tendency; it was also very much an attack on current knowledge, albeit a subtle one. This is an important distinction to remember as we move forward. 

Horkheimer unpacked the idea of theory early in his essay, starting with a statement regarding the essence of theory; he wrote, “What scientists in various fields regard as the essence of the theory thus corresponds, in fact, to the immediate tasks they set for themselves.” This one statement, in my opinion, supported his earlier assertion regarding our knowing: that it is powerful, dominant and impactful on theory. He went on to use words like “manipulation” and “supplied” to reinforce his concept of theory over the one that so many have held as the standard and the determiner of our knowing. He was asserting that theory was nothing more than an intentional task of an individual, inside their own individual essence, which was the application of his new assertion. Because he believed knowing was rooted in our own ascendancy, he also believed it was nothing more than an individual choice, which established the concept of theory as he needed it to be presented … as personal, individual and rooted in historical and social conditions. He wrote of “the manipulation of the physical nature” in the context of the “amassing of a body of knowledge,” such as is supplied in an ordered set of hypotheses, to imply, again, that there was individual intentionality behind the idea of a theory. 

Horkheimer wrote of the influence of the subject matter on a theory and vice versa, calling the process “not only an intrascientific process but a social one as well.” It was important for him to defend his assertion of ascendancy, as his assertion was also an attack on the dominant social thinking of the day, which he identified earlier as theological in nature. The essay, at this point, reads in some respects as a scanning of the theoretical landscape in search of useful tools to develop and apply in ways that would create a distinctly Marxist ascendancy at the expense of the current dominant theological one. We can identify the tools he found useful through the points of his deconstruction.

One such example was his reference to the Positivists and to the Pragmatists. He brought attention to the fact that both had similar connections between the theoretical and the social, and he questioned whether either one was useful or even scientific. He pointed to the scientific task, to the scientific community and to their general call in such situations, a call framed as a “sense of practical purpose” and a “belief in social value,” when his assertion was that both were nothing more than “a personal conviction.” This is a primary example of Horkheimer taking both concepts and deconstructing them to the point of doubt in their current states, with the purpose of reconstructing them later inside Critical Theory to enhance and support it. 

The dominant idea of theory was always under Horkheimer’s attack. He took the current idea of theory found in science and employed his deconstruction/reconstruction dichotomy to develop the future one he intended to employ. He began by acknowledging that scientists in various fields regard their own tasks as the essence of theory, and the key word here was “essence,” which he had already established as that which was rooted in man. This idea of essence destabilized general scientific theory, pushing it away from the theoretical and towards the practical, pragmatic and even personal; it was the personal that destabilized theory to the point that its very existence was in question. Once theory is personal, it is no longer theoretical but instead opinion or perception, which positions it to be developed into something completely different. This was the brilliance of Horkheimer on display in Critical Theory.  

The conception of theory, for Horkheimer, was “grounded in the inner nature of knowledge” for the purpose of establishing it as historical, which, again, pushed it completely away from the theoretical, reducing it all the way down past the personal to an “ideological category.” Once it was categorical, it became vulnerable to manipulation and in a state of readiness for the purposes and intentions of the manipulator. Horkheimer wrote, “That new views in fact win out is due to concrete historical circumstances, even if the scientist himself may be determined to change his views only by immanent motives.” He went on to note that concrete historical circumstances, while important, tend inside science to succumb to “genius and accident.” Here, again, is the brilliance of his deconstructive process on display as he cast doubt into the very essence of theory as it was then known and employed. 

As Horkheimer explained why social conditions are not considered as much as other factors, he made an interesting reference. He wrote, “The theoreticians of knowledge usually rely here on a concept of theology which only in appearance is immanent to their science.” That one statement was followed by a reference to new definitions and how they are drawn, depending on directions and goals of research. This was the process of Critical Theory and the beginning of the end of general scientific theory. It would never again be as dominant despite the efforts of many. It took many years and consistent and continuous pounding away at the foundations by generations of Critical Theorists, but here we sit in the place that Horkheimer imagined many years ago. Critical Theory has become that which is imposing its will on all other theories.  

As we read Horkheimer, we are experiencing his deconstruction/reconstruction process as it was used on the concept of theory inside science. It was this deconstruction of theory that was the crack in the door, so to speak, that opened for Critical Theory to come barging into culture. It is through Critical Theory that so many other theories entered our culture and pushed their way into battling cultural norms, but each would never have been granted access under the old concept of theory. The idea of the theoretical had to be destroyed for Critical Theory, with its subjective nature and its Marxist tendencies, to take hold.

This concludes this post. Stay tuned as I unpack more of Horkheimer’s essay with the hope that it will help us understand more about our world and the impact of Critical Theory on it. Until then … 

Critical Theory: Part I

Clarity for the Obscure

This post begins a series on Critical Theory as I attempt to bring a little clarity to that which is obscure, or at least seems obscure. It is always difficult to bring clarity to something that seeks to remain obscure (please note this reference). Is this the nature of Critical Theory or does it just appear this way to those of us unfamiliar with it? The conjectural nature of Critical Theory does position it to be distorted but is that distortion just part of its fabric or is it intentional? Good questions that demand answers, which is the purpose of this series. It will be a bit like nailing Jello to the wall … you will soon see what I mean. 

Let’s begin with the Stanford Encyclopedia of Philosophy, which describes Critical Theory as a phrase that “does not refer to one theory but, instead, to a family of theories,” theories designed to critique society by assimilating chosen normative perceptions through the empirical analysis of current societal norms. I know what you are thinking … what does all of that mean? Hidden behind this loquacious description is an agenda that is intent on many things, but do not miss that changing the world is one of those intents. 

Let’s begin by dissecting this murky explanation of Critical Theory provided to us. What it says is that Critical Theory was intentionally created to be integrated in ways that disrupt the dominant norms of society, using an intentionally created analysis to deconstruct dominant norms into fragments that can then generate a praxis of sorts, one that can be applied to current culture to produce norms with Marxist tendencies. Whew! I am not sure that I provided much clarity, but in short, the idea is to provide Marxism an opportunity to become a worldview that can be applied in all situations and, through those applications, become the dominant worldview. Again, the goal is to gain a dominant foothold in mainstream society. All references to Critical Theory (and it is always capitalized as a proper noun) are references to the work of several generations of philosophers and theorists, all with foundations in the Marxist tradition. It is truly not just one theory but many theories working together for one common goal. Clear as mud, right? Let me provide a little historical context with the hope that it adds some lucidity.  

The whole idea started with the son of Herman Weil. Herman Weil was an exporter of grain who made a fortune exporting grain from Argentina to Europe. His son, Felix Weil, inherited his father’s fortune, but instead of using it to broaden the family business, he used it to found an institute devoted to the study of German society through a distinctly Marxist approach. Not long after its inception, the Institute of Social Research, as it was to be known, was formed and formally recognized by the Ministry of Education as part of the Goethe University Frankfurt. The first appointed director was Carl Grunberg (1923-29), a Marxist professor from the University of Vienna. The institute was known for work that combined philosophy and social science, two distinct and separate fields of study at the time, in ways informed by Marxism. As for the term itself, Max Horkheimer first defined it in his essay, “Traditional and Critical Theory,” in 1937. I will be referencing and quoting from this essay throughout this series. 

Today, Critical Theory is composed of many different strands of emerging forms of engagement in all areas of culture, all coming together to destabilize current dominant norms into positions of weakness. In these positions of weakness, the intent is to introduce forms of Critical Theory that eventually erode the dominant ideas and replace them with ideas rooted in and composed of Marxism. The entire process was an attempt to normalize Marxism and package it in a way that allowed it to be transformed into the norms of society. This became known as the “Frankfurt School” of critical theory, and as we will find out, they were very successful. 

This school is not really a “school” in any sense of the word but a loosely held (critical) tradition or belief system bonded by critiques on how best to define and develop the (critical) tradition in ways that will push it into mainstream society. Marxism’s largest deficit was thought to be its absence from mainstream society; it was thought that if it could just be applied and lived out by more people, it would be embraced and would change culture. The movement was meant to correct this perceived deficit through a more expansive means that would extend its roots deep into culture and provide more people the means to embrace it. The initial efforts of the (critical) tradition attempted to combine philosophy and social science into an applicable theory that would serve as a door into mainstream culture; it was created with “liberating intent” (with a goal of freeing society from the current dominant norms), but here is an important part of the application of this theory. These philosophers were patient; they understood that what they wanted to accomplish would take time. It would actually take generations of philosophers pursuing the same theories in the same manner to claim any ground in mainstream society. The first generation of these philosophers was, as it has been called, “methodologically innovative” in its approach to developing this (critical) tradition. Marxism was their vehicle of change; it was also their product, which they hoped would become a dominant part of society. They integrated it with the work of Sigmund Freud, Max Weber and Friedrich Nietzsche, each of whom had made his own inroads in society, using their work in secondary ways to develop a model of critique anchored in what is known today as Critical Theory.

Some of the prominent first-generation philosophers were Max Horkheimer, Theodor W. Adorno, Herbert Marcuse, Walter Benjamin and Jurgen Habermas, who is still an important figure among second-generation philosophers in Critical Theory. In what is sometimes known as the third sense of Critical Theory, the work of Michel Foucault and Jacques Derrida was referenced and used to advance the tradition due to their associations with psychoanalysis and post-structuralism, with particular interest in Derrida’s theories of deconstruction. Once a workable tradition (theory) was created, it was used as a means of analysis of a wide range of phenomena—from authoritarianism to capitalism to democracy. Each analysis drew Critical Theory closer to the pillars of society—the family, the church and the school—and to the replacement of a moral paideia with one with Marxist foundations. Today, we see evidence of its presence in a wide range of cultural norms, including in how we live, think and act. Its influence is wide and deep and extends into many areas of current culture in such a complete way that there are elements of Critical Theory in our lives that we don’t even consider Critical Theory. 

As I close this post, my goal was to give you a macro-picture of Critical Theory. I hope you are now a little closer to understanding it than you were before you read this. In my ensuing posts, I will begin to unpack the tradition so that we not only understand it, but we can also identify it and the areas of our own lives it is impacting. This is why thinking matters to all of us.   

My Two Cents … on the election

This is a new feature for this blog. I call it, “My Two Cents.” In this section, I will give my take on events that have taken place in our world. As is so often the case, most things are not what they seem to be at first. We live in a world that pushes us to react in our thinking and in our doing; we should avoid both. In this section, I hope to slow things down a bit and ask a few questions that force us to look at issues from a few different perspectives. Please keep in mind that this feature is only about my take on things, whether right or wrong, and nothing more. So, let’s jump right into this first post.

What just happened? I am sure many of you are shaking your heads and asking yourselves the same question. I think it is safe to say what took place was not what anyone expected. I don’t pretend to be an expert, but here is my two cents on the election, which I am not even sure is worth that much.

This election cycle presented two flawed candidates with various issues, that much we know. The media and the Democrats (this election made it clear that they are on the same side) did all that they could to ensure that this election was a referendum on the character of one of those candidates, Donald Trump. They used past tactics in some of the same ways. They called Mr. Trump names; they brought criminal charges against him; they dug up his past, spread vile filth (some true and some not), slandered him and insulted his supporters, with the goal always being the same … to destroy his reputation. For whatever reason, it did not work, but something else happened. The more they did, the more resolve he developed and the more people came to his side. It was quite astounding, if you ask me.

The other candidate entered the race with 107 days left before the election … not ideal. She was put into a difficult situation by her own party, which demanded some things from her that, in my opinion, she just did not have. She never campaigned in the primaries, which meant that she never presented her platform. Campaigning in the primaries generally does two things for a candidate: it puts you in front of voters and it provides you valuable experience talking about your platform. She benefited from neither of those, and to make matters worse, she did not give one official press conference as the Democratic presidential candidate, which, in my opinion, was a huge mistake. By not holding a press conference, she never presented herself as presidential in any official capacity. Overcoming one of these issues would have been daunting; overcoming all of them in 107 days was almost impossible. She adopted an old campaign strategy, mudslinging, which I am sure was a party decision, thinking it would do the job. The media was glad to assist her, but the strategy failed, and I think it actually made her situation worse.

So, what happened on election day? Why were so many so wrong?

I think it started four years ago with the election of President Biden; many were suspicious, right or wrong, of the results. This suspicion carried over to the policies he put into place (at least we think it was Biden who put them into place). When these policies did not work, and they did not, the suspicion grew. Four years later, as Americans voted, they did so while experiencing inflation and record-high prices. They voted while watching a world that was growing increasingly hostile, and they voted not really knowing who was running the country. The last four years have been very hard on most Americans, and anybody who says otherwise is delusional. There is a growing sense that neither Republicans nor Democrats appeared aware of how hard it has been. Most Americans view both parties in a negative light, and that view only grew darker in the last four years. Senators and Representatives alike were seen as power mongers, corrupt and out of touch with the struggles of most Americans. To his credit, Donald Trump, for some reason, did not miss this; instead, he used it. He listened and connected with the struggles of Americans, and his remarks always seemed to resonate with a majority of Americans, especially those struggling. He did not cater to the celebrities and elites, and they despised him for it, which did nothing but make him more popular with most Americans. Unfortunately for Vice President Harris and the Democratic Party, this was a big miss.

Another important issue that impacted this election, in my opinion, was how out of touch the Democrats’ responses were. I believe both parties are out of touch with average Americans, but it was the Democrats who continued to look out of touch. For whatever reason, they did not respond to the struggles of Americans with real answers; they insulted, lectured and blamed others. It was always someone else’s fault. Instead of looking within, they looked out, made excuses and insulted those who supported and then voted for Mr. Trump. They compounded their problem by labeling Trump supporters and voters as uneducated (see the post on this topic) and ignorant. Even after the election, they continue to scold and call Trump voters names, but here is the problem with all of that. I believe there were many who voted for Mr. Trump this time around who voted with the Democrats in the last election. So, while the Democratic Party was insulting all of these voters, it was actually insulting many who had been loyal to it in the past. 

The Democratic Party is another issue all its own. For many years, the party has been moving to extreme positions in agenda, message and belief. In this election, for the first time, there are clear indications that the Democratic Party has moved beyond the comfort level of many Americans, and even of some Democrats. I was shocked to see prominent Democrats, some professional athletes and even some celebrities come out in support of Mr. Trump. This should scare the Democratic Party. These last four years, while many Americans have been struggling to make ends meet, the Democratic Party has continued to push extreme positions, asking struggling Americans to sacrifice yet again for a position or a policy that would seem to bring more hardship than help. Many Americans questioned these causes and expressed their frustration, but the Democratic Party’s response was to double down on them. That was a losing proposition that came to fruition on the 5th of November, though there were earlier signs of this growing unrest. The Democratic Party placed blame everywhere but where it belonged, with them. It was Biden; it was age; it was racism, but in the end, the people did what the party should have done and placed the blame squarely where it belongs … on them.

A third issue is one that I have already referenced: extremism. The Democratic Party has been embracing extreme positions for years, but in this election there were signs that it has pushed itself so far left that it is now out of touch with most Americans. In my opinion, there were only three groups of people voting for the Democratic Party candidate this time around: the Democratic Party base, those who always vote Democrat no matter what; extreme liberals, since many of the agenda items and positions of the Democratic Party are now extreme liberal positions; and those who depend heavily on the government and need the government. Most of the Americans not falling into these three groups, in my opinion, voted for Donald Trump, despite the vilification of him. In Mr. Trump, they saw real answers to their needs, and they also saw someone who would and could fight for their rights. I believe most Americans would rather have a job than a handout, but all indications are that the Democratic Party believes the opposite.

One final issue is what I call the celebrity factor. I really feel this did the Democrats no favors. Many of the most vocal supporters of the Democratic Party were celebrities, professional athletes and wealthy liberals. Some of the antics of these supporters in the last weeks and days were offensive and so out of touch with what most Americans had been dealing with on a daily basis that they offended many, even those who have voted for Democratic candidates in the past. Wealthy celebrities, professional athletes and liberals were seen through the very lens that the liberal left worked so hard to build for the right; they were seen as privileged, spoiled and out of touch. I feel that their support actually hurt the party and hurt Vice President Harris. Her SNL appearance, in my opinion, was a huge factor in this race, as it confirmed what many Americans had been thinking already and cemented the link between her and those living lives of luxury and privilege.

In the end, people voted practically and pragmatically. They voted with their pocketbooks and for their future. It is clear: the Democrats have some work to do if they are going to fix this. The Democrats did not lose this election because most Americans are misogynistic or racist, as some Democratic pundits are already saying. They did not lose this election because of uneducated, stupid or ignorant voters. They lost this election because they ran a lazy, bad campaign. They lost this election because they are now the party of the extreme left, embracing positions and policies that most Americans question. What is their next step? Well, I don’t think calling the millions of Americans who voted for Donald Trump racist bigots is a good first step, especially when they will have to win some of those voters back for the next election. This election was very personal to many, and a message was sent. Did the Democrats, and for that matter the Republicans, get the message? Time will tell.

I believe there need to be two viable parties to keep a balance of power in this country. The Democratic Party is needed, but it needs to find its way out of this malaise in order to be viable again. If it doesn’t, then in two years it will fail again. The Republican Party has a mandate. Do they know what it is? If they fail to get some things done for the people, then in two years they will fail as well. We live in interesting times.

The Uneducated Voter

As the results of this election settle into the collective mind of the country, the responses we are seeing are interesting. One of the words being lobbed about in reference to the Trump voter is the word “uneducated.” The Washington Times, The Washington Post, Time Magazine and even Sunny Hostin of The View, a Notre Dame graduate, have all labeled Trump voters as uneducated. A professor at the University of South Carolina has gone as far as calling Trump voters stupid and “holding the rest of us hostage,” which prompts the question: what do all these people mean by the term “uneducated”?

Well, let’s start by examining what it means to be educated. In a keynote address in 2011, J. Casey Hurley spoke on this very issue. In his address, he gave what many consider a solid definition of educated: he presented a six-virtue definition, stating that we “know that understanding, imagination, strong character, courage, humility and generosity are the six virtues of the educated person.” There are many other definitions out there, and most have to do with high test scores, content, course work and degrees, but do those really define who is educated and who is not, or who is intelligent and who is not?

Back to Hurley: he acknowledged that many of today’s schools “are driven by a definition that says educated people are those who score high on standardized tests.” He deems this not useful and missing the mark. Why? He goes back to the six-virtue definition and acknowledges that educated people should have virtues; the question is, which ones? He proposes these six virtues for multiple reasons: they produce other virtues; they are a recipe for improving every learning situation; they provide an answer for how to teach; and, finally, they are inspiring. There are many others who endorse this definition, or something like it, but the question remains: are Sunny Hostin and others right in labeling Trump voters as uneducated or stupid?

Sunny Hostin was blessed to attend Notre Dame, where she earned a law degree. Many of the pundits we listen to each day attended Ivy League institutions or other prestigious universities. Does attending a prestigious institution qualify one as educated? Or does attending a prestigious institution merely equate to club membership? That is now a more difficult question than it used to be. In the past, the answer was clearly in the affirmative due to admission and entrance requirements. These institutions were looking for the best and the brightest, but today they are merely looking for a specific ideology. Is a Harvard-educated hedge fund manager better educated than an electrician? Probably, but what about intelligence? 


The question, in my opinion, is one of equity more than intelligence. If the electrician were afforded the same opportunities as the hedge fund manager, would the electrician achieve the same results? I would actually put money on the electrician to outperform the hedge fund manager. Why? Chances are the hedge fund manager was born in privilege (which is almost a prerequisite for club membership) and educated in privilege (in the club), which suggests a life with little to no struggle. How do we learn? Well, there is a good bit of evidence supporting the idea that dissonance is necessary for learning. My bet is the electrician had plenty of struggle on the way to becoming an electrician, which equates to real learning, which is why I would put my money on the electrician. So, back to our issue: is it an educated thought to label all Trump voters as uneducated? Probably not; actually, the term all of these people should be using is “schooled.” Trump voters are, in general, not as schooled as Harris voters. That much is true, but that has more to do with equity than it does with intelligence.

  
Now there is something interesting about most of those connecting Trump voters to the term “uneducated”: many of them are doing it in vile and hateful ways, which raises the question, are hateful people more intelligent than kind people? According to Psychological Science, there is research to support the claim that those with lower cognitive abilities feel more prejudice and more bias towards others, especially those who are different, than those with higher cognitive abilities. The article summed it up with the following quote: “Hateful people are typically simple people.” Those assuming that the thoughts and motivations of the people who voted for Trump are uneducated and stupid are, in essence, exerting a prejudice and bias towards those who are different from them … those who did not vote the same way they did. There are many idioms and metaphors to describe this, but let’s just say it is hypocritical and leave it at that.

   
Are Trump voters really uneducated? Maybe, but that has nothing to do with whether they are intelligent. All it means is that they tend not to be educated at Ivy League or other prestigious institutions, which is logical considering most of these institutions hold to liberal positions and seek a liberal ideology when it comes to admissions. Sure, there may be some intelligent conservative people who attend these places, but most go elsewhere for various reasons, which brings us back to the term “schooled.” Most Trump voters probably were not schooled in Ivy League institutions or prestigious universities, but that has nothing to do with whether they are intelligent, which is the implication of most of those spewing these thoughts.

On matters of intelligence, when you vote for one party all the time as if that party has all the answers … well, that is a sign of something other than intelligence, and that applies to both parties. One party, one person, one ideology and even one philosophy is never right all the time. That is statistically impossible and logically improbable no matter your belief system. Pretending the dichotomy of right and wrong or true and false does not exist is not intelligent; it is actually delusional. So, to answer the original question once and for all, the answer is no, due to a confusion of terms. The better word is schooled, and the answer to that question is yes, Trump voters are most likely not as schooled as Harris supporters, but as far as intelligence goes, they are as intelligent as anyone else. They may not be part of the club or afforded the opportunities of the elites, but neither issue has anything to do with their intelligence. Again, this is why it is important not to let others do your thinking for you. Think your own thoughts and do your own research, and you will find that your efforts will be rewarded. Thinking still matters!

Existentialism: Part VI

Part VI: Living an Existential Life

With existentialism being so abstract, how does one live inside its philosophy? This is the last question I will tackle in this final post of the series. 

I begin with a quote from Le Monde, a Parisian newspaper that attempted to define existentialism in 1945. In its December edition, it admitted that “Existentialism, like faith, cannot be explained; it can only be lived.” 

A few posts back I referenced the idea that it is indeed more a faith than a philosophy. Why is that? One of the main reasons is that it bases conduct on a belief in individual freedom more than anything else. One is free to choose one’s own conduct, but here is the difficult part: inside that freedom there is a belief that no objective moral order exists independent of the human being. It is up to each human being to create his or her own moral order by living it and affirming it through their own authenticity as they live. I don’t know about you, but that seems a bit daunting. 

Existentialism, you could say, is obsessed with individual authenticity—how individuals choose to live their lives. It rests on some bold ontological speculations about what does and does not exist. One of the weightiest speculations is the belief that there is no god or entity outside of the human being; therefore, moral values do not exist outside of the human being. There are no moral absolutes, nor are there universal laws or codes of ethics that apply to all of us. Values come to us as we live our lives in authentic ways. If we live our lives as if values were given to us by God or existed outside of our being, that would amount to existential sin: it would equate to living a life refusing to face the freedom you have been given to live your own authentic life. But from where does that thought come? Is it even a valid thought if it comes to us from others? You can see the dilemma we face. This individual authenticity is very important to the existentialist. 

Inside an existential world, every individual is responsible for deciding, on their own, how to evaluate their choices, and it is only through those individual choices, made possible by the individual freedom they have, that values come, but do they? An existentialist believes that it is the action rather than the principle that creates value, but is the action not principled action, especially if it applies only to the individual? To value one action as more important than any other action is to prioritize it—to set it apart as an ideal, which is a value, is it not? That ideal is what we strive to achieve as we live our lives. In existentialism, it is authenticity; in the Christian faith, it is the glory of God. Is there a difference? When we choose to act in a certain way, we are choosing what we think is right as it applies to us. Inside existentialism, we are to live for ourselves; inside Christianity, we are to live for others. The only difference is the direction; in existentialism, all actions are directed inward to self, but inside Christianity all actions should be directed outward to others.

Existentialism, as we have referenced, does not believe human beings have a pre-existing nature or character, but in many ways it instills this belief as an existing nature. We are “existentially” free to become “self-created beings” by virtue of our actions and our choices, but is that not an existing nature that must take hold of us for us to live as existentially free individuals? We are told that we possess absolute freedom … that we are free to choose, and that this truth is so self-evident to us, or should be, that it never needs to be proven or argued. Again, is that not a pre-existing nature, or maybe the better word is condition? 

There is an acknowledgement that no one completely chooses who they want to be. Even Sartre recognized this, and he also recognized that each person has a set of natural and social properties that influence who we become, which we might refer to as social conditioning. He gave them a name: “facticity.” Here is where, in my opinion, existentialism gets a little upside down. Sartre thought that one’s facticity contained properties that others could discover about us but that we would not see or acknowledge ourselves. Some examples of these are gender, weight, height, race, class and nationality. There are others, but it was thought that we, as individuals, would hardly ever spend time examining these ourselves, and yet, today, many spend all their time lamenting them or agonizing over them. An existentialist would describe these as an objective account not capable of describing the subjective experience of what it means to be our own unique individual. As we look out at our world, what we see is the breakdown not only of society but of existential philosophy.   

Existentialism came of age between 1940 and 1945, during and after WWII, which was a unique time, especially when considering the views of freedom and choice in Europe at that moment. Europe, at this time, was, in my opinion, the perfect storm for existentialism to bloom and grow. Its focus on individual freedom was very appealing to those coming out of war-torn Europe who had lost all freedoms for many years. The appeal was every bit as emotional as it was intellectual. Sartre was quoted as saying, “If man is nothing but that which he makes himself, then no one is bound by fate, or by forces outside their control.” He was pushing the idea that only by exercising personal freedom could people regain the civil liberties they had lost, which was taking advantage of the situation and the state of those coming out of the war having lost everything.   

There is a problem and a price to be paid for the freedom to do whatever you want whenever you want, which existentialism advocated, and that price was steep. In such a culture, everyone gets that same freedom, even those who oppose your right to it. Coming out of a war that took everyone’s freedom, individual freedom was embraced and even needed to repair and restore, but with it came a burden that we are only now realizing. There is really no such thing as individual freedom unless you live alone on a remote island. Any type of freedom, especially one advocating that every choice is ours and ours alone, will eventually affect others. There is just no way around this. 

In the years coming out of a long war, the burden was light, as our individual choices were directed at restoring the individual freedoms we had lost, but eventually those choices would move beyond restoring our own freedoms and seek other things beyond us. The desires would extend beyond what we had and seek what we were owed and what we deserved. It is in those times that this light individual burden became heavy and hard. Sartre recognized these times and presented an explanation. He said it is in these hard times that we adopt a cover of sorts to escape the pressures of choices that extend beyond us; he called this cover “bad faith.” He said that we use “bad faith” when the pressure of choice is so overwhelming that one pretends there was no freedom after all. Sartre would say that this is a special kind of self-deception, a betrayal of who one really is, but there is also evidence that this “bad faith” was a personal betrayal of existentialism. It was a desire for more … more freedom … more liberties and more rights. Sartre would claim that this “bad faith” was merely a denial of the freedom afforded to us, but who would deny freedom? He claimed that one common form of denying one’s freedom was to present excuses for one’s behavior, but is not an excuse offered precisely as a means of justifying a wrong action while knowing the right one? Again, this is another sizable hole in existentialism.

As I close this series, let me summarize the main tenets of existentialism and present a few questions to consider in response to each. 

First, true existentialists believe individuals should embrace their own freedom: everyone has the freedom to make their own choices, and these choices will and should define who we are. The problem with individual freedom, as I have referenced, is that it often comes at the expense of someone else’s freedom, unless, again, one lives as a hermit or in paralysis. The other issue with freedom is this one: there is no such thing as individual freedom. Everyone lives in some sort of community where our choices infringe upon others, which makes most of our choices not individual.  

Second, true existentialists acknowledge the absurdity of life. They believe that life is absurd and devoid of inherent meaning, which, for them, prompts individuals to create their own meaning and values through their own choices, but is this absurdity pre-existing, either in culture or as a thought? It is presented as ever-present, which is pre-existing, unless it comes from the individual living freely in a world where everyone is living their own different life, which does make absurdity a reality. My question is this: does this individual freedom contribute to the absurdity or create it?

Third, true existentialists believe in accepting responsibility for one’s own actions. They believe, and rightly so, that with freedom comes responsibility and that one should own one’s decisions and the consequences that result from them. They believe doing this will empower one to live authentically and with integrity. I am in full support of living with both, but the question is, will living an existential life produce both? What we have seen is that living authentically does not necessarily lead one to live with integrity, which suggests something else is involved in life. In most cases, integrity never reveals itself in isolation, as there is no opportunity to put it into practice. Most of the time we put integrity into practice in our interactions with others, when we place them as more important than ourselves. How can we do that if living our best existential life is to live an authentic, individually free life?  

And, finally, true existentialists believe in living authentically at all costs. They strive to be true to themselves and to avoid conforming to cultural or societal expectations and norms. The key to authenticity, for an existentialist, is to understand one’s desires and values and to live in accordance with them to the best of one’s ability. This is existentialism, but is it, really? As I have pointed out, there are some real issues of consistency and causation that must be addressed to make sense of this world in which we live, whether we are existentialists, Christians, atheists, agnostics or aliens.  

As I close, the idea of existentialism tends to scare most people when they hear the term, but the reality is that it is another philosophy trying to make sense of the world in much the same way we are. At the end of the day, I think we all want the same thing … for the world to make a little more sense to us than it did yesterday. I hope this has been a fruitful experience for those who have joined me on this journey. I hope it has pushed you to think a little deeper and to spend a little more time considering different thoughts. I hope you don’t see difference as a threat, but as that friend who sees the world differently than you do. You may not agree with him, but he makes you better because he pushes you to think about the things you would not even stop to think about without his prodding. Difference is not something to be afraid of if you can think. This is why thinking matters … always! Blessings! 

Existentialism: Part 5

Part V: The Manifestation of Existentialism and Its Miscalculation 

In my first post, I posited that we live in a culture dominated by existentialism, with most of us unaware of its supremacy. I also referenced that two of the loftier goals of existentialism were personal freedom and personal responsibility, but what seems more prevalent in culture currently are their opposites. Very few take any kind of personal responsibility anymore, choosing instead to judge or cancel, and freedom has all but disappeared, replaced by affirmation and acceptance, which have more to do with attention and recognition.

Every cultural change that has been “thrust” upon us (I use Sartre’s word intentionally) moves us beyond original ideas, which is normal for culture, but in areas of freedom the cultural movement has been substantial in recent years. In the past, there was truthful (I hesitate to say true) freedom of speech. I may not have liked what some had to say, but I supported their right to say it, and they did the same for me; that mindset has become hard to find. Say the wrong thing and risk being canceled. Post the wrong thing, even in the past, and be canceled. That is not freedom of speech; that is attacking the very idea that gave us the right to hold such a view. The attacks do not come from one side but from all sides. Those on one side blame the other side and vice versa. Everyone wants to blame everyone else, but the blame is ours … all of us. Those looking to judge and cancel do so from behind a curtain we have built and continue to support … a social media account, an obscure email or a nondescript text message. The informal restriction of freedom is here, and unless something changes, it will become formal soon. All of this, in my humble opinion, is a manifestation of existentialism’s miscalculation, which is the subject of this post. 

Existentialism’s advocacy, in my opinion, for the agency and condition of man is not the problem. The problem, as I see it, is the failure to address human nature, which is and has been a foundational issue in philosophical circles forever. That failure has left much unexplained and wide gaps of inconsistency, which weakens all philosophical approaches, especially existentialism. The question regarding human nature is still there, despite the effort to remove it from the conversation, and there are still many referencing its presence. Existentialism untethered man, like no other philosophical approach before it, from his religious moorings, giving him boundless freedom and power; what did he do with it? Well, to be honest, that is the issue. Nothing changed; nothing was different. Man did what he had done in the past; he is no closer to the truth than he was prior to existentialism. However, man does appear to be more broken than before, which suggests to many, whether right thinking or wrong thinking, that there is something to the issue of human nature after all, especially considering all that is new to culture.   

I think Sartre, Nietzsche, Kierkegaard, and maybe even Camus would be surprised, maybe even shocked, at where we are in culture today. There would be astonishment as to why we have not evolved past crime, selfishness or deceit. If a human being does not have a pre-ordained nature, why does he keep repeating the same mistakes over and over as if he did? If we develop and create our own essence, is there no means to learn from past mistakes? I believe there would be little support from the past for the canceling of others, as that betrays several foundational beliefs of existentialism, especially in the areas of personal freedom and authenticity. Sartre did acknowledge that man is conditioned by culture, but he still advocated for man to fight against this conditioning. The issue of human nature, however, is still an issue. 

Let’s look at this issue from a different angle; let’s look at it through the lie, as all of us are familiar with it and can follow its progression. If man’s nature is not bent towards the lie and, instead, is bent towards the truth or a neutral alternative, then from where does the lie arise? It cannot flow out of the nature of man if man has no preordained nature to lie, as there would be no nature from which it could flow, nor can it flow out of a neutral nature, given the constancy we see in the prevalence of lies. If nature were neutral, we would see lies, but we would not see them so widespread, seemingly in everyone. Therefore, the only other option available to us is that lying must be conditioned into us through societal influences, but there are issues with that thinking as well. With no preordained nature, we are told that our essence is created and developed through our own agency. There are those who explain lying as a means of self-preservation or as the manifestation of confusion as one contemplates how to live in an absurd world, but neither of those answers this question: why do small children lie? 

As the father of two children who are now grown, I distinctly remember not teaching them to lie when they were small. On the contrary, my wife and I tried very hard to teach them to tell the truth. We did not send them to a place where they were taught to lie. Everything we did was done with the goal of teaching them to tell the truth. Even before they attended school, they lied. Why is that? How do we explain the lie in small children without including an innate nature in our explanation? How do we explain that we all have lied and continue to lie without including in that explanation an innate human nature predisposed to lying? I admit the issue is more convoluted than simple, but it does present a dilemma. 

I have no answers to offer other than the one we do not want to hear … all indications are that we do have a preordained nature that predisposes us to lie. I am open to other options, but for me, this one checks more boxes than any other. This is just one issue; there are others, but they all come back to this issue of essence. If we create and develop our own essence, from where do we develop a disposition that lies and is capable of committing other, more serious offenses? Conservatives, liberals, atheists and even existentialists will have many fundamental disagreements on many issues, but on these issues, there is a consensus. No one endorses lying, murder, theft or any other heinous act, and yet, they continue to exist. Why? 

An existentialist would suggest, as I stated earlier, that these are the result of the confusion faced in the search for meaning in a life that is, to existentialists, absurd, but that is a weak retort if, inside the same philosophy, we acknowledge the astounding ability to develop and create our own essence. Would this confusion that causes us to lie not also affect the creation and development of our essence? As you can see, there are more questions than answers, but I do believe there are enough questions to justify more discussion. I do respect the stance an existentialist takes in the complete rejection of murder on the grounds that it infringes upon another’s efforts to live an authentic life, but would that rejection not, itself, be an infringement on the murderer’s life and his attempt to live it? These are difficult questions with seemingly no easy answers.  

Sartre would suggest that one’s freedom cannot place a limit on the content of choice, again, a hard stance to take in certain situations; he valued the manner of the choice more than the actual choice itself, but still the choice, according to Sartre, rested completely within the individual, or at least it should. Yes, existentialists believe life is absurd, but in an absurd world, there is plenty of room for order and structure, especially if creation and development contribute to both. For Sartre and other existentialists, it always came back to the idea of freedom and how it was defined. Inside existentialism, freedom is always defined as an individual choice, one confined to and owned by the individual. 

Here is my issue. Individual freedom, which is owned by and confined to the individual, will move outside the individual at some point if exercised. One never exercises individual choice in a vacuum. Individual freedom that splashes over into crime will act upon another; the same can be said of individual freedom splashing into altruism. The issue in both cases is that individual freedom is no longer individual; once it is put into action in community, and it will be, it moves away from the individual, interacts with others and infringes upon them. Nihilism rejected the idea of morals and values for this very reason, while existentialism embraces individual morals and values, presenting a dilemma. When it comes to morals and values, can they be held in isolation by the individual, or does that place the individual into a kind of moral paralysis or turn the individual into a moral hermit? 

Questions like these are why thinking matters. I will have one final post on this topic. Until then …