Deconstructing Deconstructivism: Part IV


We have examined deconstructivism, but one question remains: how are we to respond to it? I would suggest that any response begins by first coming to an understanding of instability as it is defined by deconstructivism. Questions come with any speculation that instability is the organic state of language. What if its organic state were, instead, something else? What if instability were merely the tension of determination? Derrida provided some support for this type of thinking when he referred to the openness of instability as “aporia,” describing it as a puzzle or quandary (Jackson & Mazzei, 2012). What if “aporia” were that which deconstructivism imposed over meaning? Nothing here is certain, and none of it should be treated as if it were.

Let’s begin this post, instead, with some hard truth: instability is a part of life, but it is a part of life we fight against. No one wants to be unstable, even when it comes to language. We seek clarity, not confusion, especially in our communication. What do we do with confusion? We look for ways to clarify and eliminate it, and yet deconstructivism seems, to me, to seek to keep it. I see it seeking to be the means of clarity. Do I dare go further? I think it goes beyond clarity and seeks to be “the” means of meaning. What other purpose would it have for keeping instability alive, especially in language? Let’s back up a bit and look at what instability does if left on its own. Simply put, it destroys stability. As we have studied deconstructivism, we have referenced its interaction with norms. Instability undermines stability, especially when it comes to stable norms. Derrida advocated that an important part of the process of deconstruction was to keep asking questions, a theoretical device used to keep meaning and language from falling into sameness. In critical analysis, sameness is never welcomed; it is always viewed with suspicion and as bias.

Derrida saw both language and thought as living in what he called binary opposition, which he suggested was a confirmation of the instability of language. He saw language relying on opposing concepts like good/evil, true/false and happy/sad to sustain itself. He did not see these as part of the natural state of language but as constructions imposed on meaning and language by human beings. What is language if not a tool of communication for human beings? I do not see language as an entity unto itself; I see it strictly as a tool used by human beings to communicate. In my reading of Derrida, I did not get the sense that he saw language in the same light. He saw these binary oppositions as existing within a dangerous possibility: that one term would be given privileged status over the other, thus affecting the natural state and balance of language. He claimed that this privileged status (one term over the other) prevented meaning from “disseminating out beyond its initial intended meaning” in multiple directions, which assumes language is not a tool but an entity unto itself. One question I have regarding binary oppositions is this: do they not define each other? Is not dark the absence of light? Is not false the answer that is not the right one? What is the alternative if these binary oppositions are removed? I do not see them as constructions of human beings but, instead, as observations by human beings. Human beings did not create dark or truth or even good. They observed their presence or absence through, in many cases, their binary opposites.

When it comes to communication, I do not seek to protect two binarily opposed meanings, at least not when I am seeking to be clear in my communication. Communication, for me, is determining shared meaning for the purpose of effective and clear communication. It is understanding meaning and embracing the same meaning. Did Derrida see language impacted by context, or was he afraid of the impact of context? I am not sure. Derrida claimed to have seen language and thought as undecidable (his word), a term he used to describe meaning as having no clear resolution, which, from my perspective, leaves language in one place … in a state of confusion, which could also be referenced as instability. Is this what he saw, or is this what he needed language to be for deconstructivism to grow and thrive? How we see and respond to deconstructivism will do one of two things: it will either feed it or starve it and kill it.

Deconstructivism is often referenced with terms like unpacking, destabilizing and undermining in regard to its interaction with norms, defining those norms that are stable as assumptions, as binaries and as privileged. These are intentionally negative terms designed, in my opinion, to mark them for unpacking or destabilizing. But, again, what if the theory of deconstructivism is wrong when it comes to norms? What if instability is not a natural state but, instead, one created for the purpose of destabilizing those norms that are stable? If this is the case, then we would need to confirm through a dialectic method whether deconstructivism is viable or not. When it comes to literary theory, deconstructivism operates by encouraging us to read literature closely but with skepticism, questioning binary oppositions, resisting final interpretations and embracing ambiguity. When we put all these words together—skepticism, questioning, resisting and ambiguity—what do we get? These words encourage doubt, challenge authority and embrace uncertainty, which could be summed up in one word: instability. The question then becomes: does deconstructivism identify instability or produce it?

Considering this question, I think we must first understand deconstructivism for what it is. I am not advocating that it produces instability, but I am saying that there exists a possibility that it does. Therefore, we cannot assume that it does, nor can we assume that it does not. It is important to understand that any disagreement with its principles—its skepticism of fixed meanings, rejection of absolute truth and tendency towards destabilizing established frameworks—if not done critically and constructively, will engage it in the very manner being criticized and result in confusion or ambiguity, which is exactly what deconstructivism wants and, in many ways, needs. In this series, I have tried to provide a picture of this theoretical position from different angles for the purpose of understanding. Ignorance is offering criticism of that which we do not understand; analysis is offering constructive critical analysis in a thoughtful, respectful and knowledgeable manner. Back to our question: how do we respond to deconstructivism?

Let’s begin by seeking to understand what we believe and subjecting our own beliefs to the same analysis to confirm whether they are true or not. So many of us are unwilling to do that, but we must be willing if we are seeking truth. We must, next, understand that our perceptions, as right and as true as they feel, are only our perceptions. They are not reality or even true, at times. Sometimes they are true, and other times they are not. Most of the time they are built and reinforced by someone else’s perceptions, which should be analyzed as well. For example, I have been advocating in this series, in subtle ways, that one of the weaknesses of deconstructivism is its lack of focus on the pragmatic reality of communication. To communicate, we need “shared linguistic and cultural frameworks,” and my example of that is language. English speakers do not communicate well in other parts of the world if they are monolingual or unwilling to engage with the language of the region in some way. If they expect everyone to speak English and hold a very superficial view of communication, then they will struggle to communicate because they have allowed instability to reign and take no action to clarify it. There are other aspects of communication, like culture, attitude, countenance and a willingness to engage and communicate. If none of these are engaged, communication will be lacking and remain ambiguous and confused. That sounds nothing like the state of communication needed to communicate effectively, and yet, that is a practical example, albeit a simple one, of deconstructivism at its simplest level.

As we engage deconstructivism, and you will engage it, it will be helpful to recognize it. How will you do that? Let’s start with its tendency to blur all distinctions. Not only will it seek to destabilize stable norms, but it will blur clear distinctions, which tends to lead to relativism, another sign of the presence of deconstructivism. Where do we see this? Right now, the most prominent place we are seeing this is in the blurring of the genders, male and female. This is a clear indication of the presence and impact of deconstructivism, but it is also an opportunity to address deconstructivism’s weakness when it comes to practicality and real-world application. While there is a blurring of the genders (per deconstructivism), there is not a blurring of the product of this blurring, which is contradictory and an opportunity to determine its validity that we need not miss. Again, our response depends on our ability to identify the presence and impact of deconstructivism and then respond respectfully and lovingly to it inside its own theoretical methodology. This means we must understand it, something most of us are unwilling to do. It is helpful and intelligent to read and study both sides of an issue. As difficult as this is to do, to really understand and respond well, we must do it. Another tendency of deconstructivism is its push towards ambiguity, which is not workable in several vocational situations, especially in areas like medicine and engineering. We should not blindly and emotionally reject deconstructivism outright because of these two examples but use them, applying them back on top of deconstructivism as a means of pointing out weaknesses, gaps and breakdowns and asking questions.

Deconstructivism is a critical theory that is used effectively in academics in micro-situations, but its struggles, like those of most academic theory, begin when it is applied in culture in real-world macro-situations or used to push an agenda and change behavior. Any theory, good or bad, if applied in similar situations, will produce similar results. We should respond as civilized, respectful human beings with a critical eye towards its application in wrong settings, to learn more about it and use it to pursue truth. In the right settings, it is effective in rooting out bad theory and paving the way for good theory, but in the wrong settings, it quickly becomes a hammer akin to propaganda, used by those with malicious intent to inflict their ideas on others via power, and that is neither ethical nor critical analysis. This concludes this series on deconstructivism. I hope you enjoyed it. Until next time …

Derrida, J. (1988). Derrida and différance (D. Wood & R. Bernasconi, Eds.). Evanston, IL: Northwestern University Press. (Original work published 1982).

Deconstructing Deconstructivism: Part III


In this post, I jump back into the rabbit hole known as deconstructivism. Let me begin with this statement: the process of deconstruction is not the opposite of anything; instead, it is a means of instability. This one statement will color everything else in this post. This much I know—deconstructivism is prevalent in our culture. It perceives any control or order outside of itself as detrimental, unnatural and a threat to itself. It is built to attack all of this for the sake of its own preservation. A word of caution before reading this post … it is longer than usual, and that is due to jumping into the world of philosophy. In that world, language’s importance cannot be overstated. It is the primary tool through which philosophical thought is communicated, analyzed and debated. I do not plan to go down into the depths necessary to adequately explain language’s importance to philosophy, but I do plan to dig a little deeper than normal. So, let’s get started.

When discussing deconstructivism or any other philosophical theory, the role of language must be addressed. Language is a communication system that involves words and systemic rules that organize those words for the purpose of communication. We need language, but so do philosophy and its theories. Language, as one of the main forms of communication, is important to philosophy, but before we get into why, we need to understand the configuration of language. Language, as a form of communication, has specific components; two of the most important are a lexicon and a grammar. A lexicon refers to the words used by a given language. These words have meaning which must be understood to communicate. The grammar is a set of agreed-upon rules that organizes the lexicon to convey meaning. Without an agreed-upon lexicon and grammar, all communication would be ineffective. Therefore, language is used by philosophy as a vehicle of change to deliver and communicate its theories; deconstructivism, however, took this idea to another level, as we shall see. Over the next several paragraphs, I hope to accomplish two tasks. First, I hope to address how deconstructivism delivered this change, and second, I hope to address the change that was delivered.

How does deconstructivism deliver change? You have probably already presumed that language is involved in some way, and you are correct. Deconstructivism, like all other theories, uses language as a means of delivery, but deconstructivism does something no other theory has done … it goes beyond using language for communicative purposes and challenges its authority, its grammar, by way of tension. It posited that the natural state of language was not fixed or absolute but unstable and fluid. This one fundamental belief does a lot of heavy lifting for deconstructivism. It provides a posture of change in both the lexicon and the grammar of language. Most philosophers assume a prejudice of general language to justify creating their own language. There are many reasons for this; some pure, some not. My point is that when they do this, they assume control of the language and the power associated with it. Deconstructivism is similar in approach but different in scope. It did create some of its own language, but it did this to control all of language. Language is its means of delivering change, but unlike other theories, the scope of change extends beyond its theory to all of language and culture. It sought to position itself to be the lexicon and the grammar of all language so that culture would come under its control. How did it do this?

It began with an attack on norms. Any past or pre-established norm was considered a threat to deconstructivism due to its stability. Deconstructivism posited that stability is not a norm’s natural state but is, instead, a sign that a norm has moved away from its natural state of instability. The first battle began with language. Is there any bigger norm out there? If it could deconstruct language and re-create it under its control, then nothing was out of its reach. Culture, in many ways, is defined by its norms, and there is no other factor as impactful as language. We may quibble over whether language is a norm or not, but it does color the culture in which it lives. Norms, along with language, are two of the standards that define culture. If we understand this, we may better understand some of the political battles taking place and why the fight is so intense. What is at stake? The answer is our norms. They define our culture, and they define us.

Norms are norms because they are behaviors or mindsets considered acceptable by most people of a specific culture, despite their own individual beliefs. Norms that are stable define our culture and who we are, but stable norms are under constant attack from deconstructivism for one simple reason: stability threatens deconstructivism. Why? We only need to go back to the first paragraph and remember that deconstructivism is not the opposite of anything; instead, it is a means of instability. Norms that are stable produce consistency, sameness and constancy. Stability is often seen as the opposite of change, and when it comes to human behavior, stability is seen as the neurological basis for consistent habits, which involve the stabilization of neural information. Stability makes change more difficult, and it makes control next to impossible. For deconstructivism to impact culture, it needed instability to be a norm, and then it had to become “the” norm of all norms. How did it do that? It created cultural instability and then became the stability in the instability. Establishing instability as the natural state of language allowed language to be the vehicle of change. It was language that did the work of deconstructivism; it was language that delivered instability to culture.

Deconstructivism still had to address those norms that were most dominant. Deconstructing any norm requires the general population of the culture in which it lives to embrace the change. Support for changing something stable will only be accepted if there is initial suspicion of the norm. This suspicion flows out of the instability of the norm, which would have been established by deconstructivism. When a tried-and-true norm is perceived as unstable, our human condition takes over, rendering us suspicious of it. We begin doubting it and the other norms associated with it. You have experienced it over the last several years … removing statues of past leaders, attacking the integrity of institutions put in place to protect and serve, and even promoting bias and oppression as good. This is how deconstructivism delivers the change it needs to live. It has changed you and me, and it is fundamentally changing culture.

What change has deconstructivism presented as normal? That one is easy; it is instability. Instability comes in many forms. It is tension and doubt. It is skepticism and isolation. What do these things do to us? Well, they weaken our foundations and punch holes into our existing norms, reducing everything to its lowest form, which makes us doubt everything. When we do this, everything is vulnerable to the dominant idea of the day, which would be, you guessed it, deconstructivism. Norms are not just commonly held beliefs; they are guard rails of the highway we call culture. Removing them does not bring freedom but danger. A culture without norms was what Derrida wanted because he wanted deconstructivism to be the guard rails of culture. He would call such a state the “absence of presence” or the “always-already present,” and he would embrace it because it would be a culture of instability. Derrida would refer to such a situation as “trace” and see it as a means of stripping away the “supposed” contradictions of language, opening it to new “true” meaning. He would call it “the absent presence of imprints on our words and their meanings before we speak about them” (Jackson & Mazzei, 2012, p. 19).

This is deconstructivism in its truest form. It is “the norm” that defines all other norms by pushing every other one to instability while it remains the lone stable, dominant norm. We see and experience it every day. It is present in our media and especially in our government. If you listen, you will hear it. Truth is no longer that which is true but that which is repeated and situational. Any belief in anything stable is to be challenged because everything must be unstable. The media is no longer the watchdog of the people but a mechanism of manipulation and change. Anything presented as an absolute is attacked, viewed with suspicion and perceived in negative ways for one simple reason: it is a threat to instability. Then, there is suspicion—we have become suspicious of everything. This is the impact of deconstructivism.

Suspicion is only a short path to the cliff of paranoia. Those who are suspicious of everything eventually doubt everything, which is a form of paranoia. What do you trust in culture? What do you know to be true in culture? Are you concerned that there are no longer real answers to these questions? This is deconstructivism. It works by giving everyone access to itself through suspicion brought on by instability. Someone said to me, “But we have community, don’t we?” We are told that we have community, but what we really have is isolation. Our “community” is no longer in-person but one of technology. We text, tweet, post and email more than we talk in person; that is not community. That is living inside instability and calling it other things: individualism, preference, perception and self-preservation. Make no mistake, these are not elements of community but elements of instability and deconstructivism.

This world of deconstructivism is a strange world. It is a world where everyone is king. The problem is that when everyone is king, no one is king, except the one who made everyone king. We embrace and encourage selfishness. We no longer talk about integrity and honor. Difference is a means to an end, and we have eviscerated any idea of excellence by calling it intolerance. Everyone has become a judge without ever looking in a mirror. Integrity has been pushed aside and replaced by self-preservation, and empathy for others has evaporated into the air. Hard work is viewed with suspicion, and all forms of submission are labeled as oppression. We choose criticism over encouragement, negativity over positivity, selfishness over selflessness and materialism over minimalism. This is our world. How are we to respond to it? That is for another day and another post. Until then …


Deconstructing Deconstructivism: Part II


Looking at the process of deconstruction through the lens of deconstructivism is a bit like looking at the world through the eyes of Alice as she peers through the looking glass; you can see shapes and colors, but nothing is clear. Derrida explained the process of deconstruction in a curious way when he stated that “[it] acquires its value only from its inscription in a chain of possible substitutions, in what is too blithely called a context” (Derrida, 1985, p. 2). Derrida presented deconstructivism as an organic act of creation found inside language, but he also presented it as that which was only determined by the context of its use. It is this one word, “only,” that provides deconstructivism its protection, which is its ambiguity. Contexts are different and always changing. If deconstructivism is creation determined by the context with which it interacts inside language, then it is never the same and is always evolving into something different. My point is that the process of deconstruction is an action of instability acting on that with which it interacts. This we do know. What we do not know is whether its interaction is an act of imposition or of revelation.

I would like to suggest that the use of the term “organic” is an intentionally heavy term, more calculated than not. Derrida claimed that he did not create deconstruction but found it as it was, always “going on around us,” which, interestingly enough, was the same state in which he claimed to have found language and meaning. Both, according to Derrida, were found … unstable in their natural and true state, which raises the question: is instability their natural and true state? Were they found unstable before their interaction with deconstructivism or as the result of their interaction with deconstructivism? This is an important point because we know that there is instability in the world; what we do not know is whether this instability is organic, manufactured or a combination of both, especially when it comes to language.

Each morning, you and I awake to an unstable world. You can feel it just like I can. I am old enough to remember the stability of the world years ago. Sure, there were issues, but there was decency and common sense; there are now tension and instability in their place. Both are now norms, replacing the stable ones of the past. It is disconcerting to me that stability is now perceived as a negative in relation to language and meaning. Words have meaning and will always have meaning. That should never change, and yet it has. In the next several paragraphs, I will present a case that deconstructivism, like its cousins, Marxism and Critical Theory, is intentionally providing the means to deconstruct stable norms and replace them with unstable ones for one reason: power.

Jackson and Mazzei, in their book Thinking with Theory in Qualitative Research, describe their views of deconstructivism, which are directly linked to Derrida’s. Jackson and Mazzei quoted Derrida when they wrote, “Deconstruction in a nutshell is the tension between memory, fidelity, the preservation of something that has been given to us, and, at the same time, heterogeneity, something absolutely new, and a break” (Derrida, 1997, p. 6). The process of deconstruction is now an accepted part of qualitative research. It creates tension, which allows it to be analytical, but it also needs this tension for itself. The process of deconstruction required tension to become an organic part of language, but to maintain this status it also needed man to be perceived as a threat to it because, like every other theory, there will be men and women who challenge it, as there should be.

Derrida thought—and I think he is right on this—that we (human beings) perceive tension as negative and seek to move away from it or eliminate it whenever we can, which would be detrimental to deconstructivism. Derrida understood that, as people, we tend to reject tension and seek stability, especially in our language. This would destroy the process of deconstruction. Derrida wanted tension … he needed tension, and he needed it to be embraced and accepted as a natural part of meaning and language, but he knew that would only happen if instability were language’s true and natural state. Jackson and Mazzei posited that deconstructivism’s presence will be found where we find an “unsettling,” or a “ruffling,” of current normative structures (Jackson & Mazzei, 2012). This is part of the analytical nature of research, and part of the process of deconstruction, which began as theory but has now extended into everyday life. Tension and instability, which are part of our world, are presented as evidence of the presence of the process of deconstruction, which I acknowledge; what I struggle to acknowledge is that both are also presented as evidence of the true and natural state of language.

What I believe instead is that the process of deconstruction is acting upon language, producing both tension and instability. It would be akin to me making the case that all trees exist in the natural state of being cut down, which I label as downcut. When they stand erect and grow, I label this as a will imposed upon them, not organic to them; instead, their natural and true state is downcut. My evidence in support of my theory is my ability to take my ax and chop down a tree. As the tree falls to the ground, I present it as evidence of the presence of downcut and of a tree’s natural and true state. Is that evidence of its natural state, or of me (downcut) acting upon that tree with my ax? Is this unsettling or ruffling of a stable norm an indication of the presence of the process of deconstruction, or is it simply change, adjustment or the imposed will of the process of deconstruction on that with which it is interacting? This is the confusing world of deconstructivism and why it is worth exploring. It is a roller coaster ride with plenty of ups and downs. There is much more to address. Please come back for the next post as I continue to try to deconstruct deconstructivism. Until then …


Deconstructing Deconstructivism: Part I


Deconstructivism, another theory critically aimed at the norms of culture, is a theory that has impacted all of us, and yet most of us have never heard of it. To understand it is, at best, to attempt to understand it because, be warned, it is ambiguous and vague. It is like nailing Jello to the wall; once you think you understand it, it interacts with something else and changes. Deconstructivism is change and difference and criticism and tension all rolled up into what I see as varied disparity. It first appeared in a 1967 book entitled Of Grammatology and has grown in reference and documentation ever since.

Let’s begin with a quote. Jacques Derrida, in his article “Letter to a Japanese Friend,” explained deconstructivism to his friend by insisting that it is “an event that does not await the deliberation, consciousness or organization of a subject or even modernity” (Derrida, 1985, p. 2). If this seems a rather odd way to describe an event, you are right, but it is not an event that he is describing; it is deconstructivism. Inside that seemingly innocuous description is an affirmation of deconstructivism’s metaphysical reality. Derrida stated to his friend (Professor Izutsu) that to define or even translate the word “deconstruction” would take away from it, which is a suggestion as to its nature and its protection. How does one disagree with that which cannot be defined or translated? The answer is simple: one does not because one cannot.

Derrida, in my opinion, was stating that deconstruction was a notion of a reality rooted in situational agency. It was designed to avoid the confined corner, the proverbial box, the closed door, and to assert its own agency in interaction with individualism (or context) as a means of truth. According to Derrida, it was and is a critical methodology that analyzes how meaning is organically constructed and deconstructed within language, which we all understand to be the primary means of communication between human beings, and yet it is not language that seems to be under attack. Instead, language seems, to me, to be the vehicle of delivery for deconstructivism.

Let’s be clear: deconstructivism is not a form of Marxism nor of Critical Theory, but it is related to both, although indirectly. The process itself claims to reveal the instability of language, which it presents as language’s true and natural state. Is language unstable, or is language, as my cynical mind suspects, being pushed to instability by deconstructivism? I would like to posit a question: if instability were not language’s true and natural state, could deconstructivism determine language’s state, or would it, instead, change its state? I am not sure, but I look forward to exploring that possibility and others. I do know this: its existence depends on the instability of language and meaning.

Back to this question: is language unstable? Yes and no! I think language is like anything else; it works from instability to stability. I know I seek clarity in my communication, and one of the ways I do that is to ensure that meaning is consistent with those with whom I am communicating. How is language unstable? I believe language is unstable if meaning inside language is unstable. How does that instability remain and not work towards stability? One of the methods of maintaining instability is addition. When other meanings are added to true meaning, clarity is not produced; instead, instability is maintained. Addition, for me, creates instability, especially when it comes to language. If we have found instability as a state of language and this discovery was the direct result of deconstructivism’s interaction with language, then there is another, more difficult question to consider. Is the instability of language its true and natural state, or is it a direct result of deconstructivism’s interaction with language?

Deconstructivism claims that one of its goals is to push meaning to its “natural” limits and expose its “true” nature, which, according to Derrida, is an instability heavily dependent on difference (addition). I am not a big fan of coincidences and see them as problematic. Here is my issue: if language is considered unstable in its natural state, and deconstruction is itself instability in its interaction with language, is that a coincidence? Again, I don’t really buy into coincidences. I do know that when instability interacts with stability, the result is generally less stability. We know this from the study of physical, biological and even social systems. We also know instability manifests in three ways: gradual change, sudden transitions and oscillations. There is nothing to indicate that when instability is introduced to a stable system, that system stays the same or even stays stable. It always changes; at times, stability may eventually be achieved again, but not before the system goes through a period of instability. My point is that I am not convinced that the natural state of language is instability. There is a solid case that the instability of language is due, in part, to its interaction with that which is unstable. Are you confused yet? Buckle up, because this roller coaster ride is just beginning. Stay tuned for my next post in this series. Until then …

Derrida, Jacques. (1988). Derrida and différance (David Wood & Robert Bernasconi, Trans.). Evanston, IL: Northwestern University Press. (Original work published 1982).

Epistemology: Knowledge, Understanding or Both

Have you ever said, “I do not understand”? I am sure you have, but have you ever thought about what it means to understand? It seems so basic a concept that everyone should understand what it means to understand, but do we? Do we understand in the same way we used to? Is understanding someone the same as understanding something? This post explores understanding through the lens of philosophy.

It is fascinating to read that this concept of understanding has, in philosophy, been “sometimes prominent, sometimes neglected and sometimes viewed with suspicion,” as referenced in the Stanford Encyclopedia of Philosophy (SEP), which was my main resource for this post (Grimm, 2024). As it turns out, understanding, which falls within the branch of philosophy known as epistemology, differs depending on the time frame. Who knew?

Let me start with the word “epistemology,” formed from the Greek word episteme, which for centuries was translated as knowledge, though in the last several decades “a case has been made that ‘understanding’ is the better translation” (Grimm, 2024). This is due, in part, to a change in the semantics of the word “knowledge,” a change prompted by a shift towards observation as the primary means of obtaining knowledge. But should that change how we define understanding?

The SEP references the theorist Julia Annas, who notes that “episteme [is] a systematic understanding of things,” as opposed to merely being in possession of various bits of truth. We can know (knowledge) what molecular biology is, but that does not mean that we understand molecular biology. There is a clear difference between knowing something and understanding something, or at least there used to be. Both Plato and Aristotle, according to the SEP, considered episteme an “exceptionally high-grade epistemic accomplishment”; they viewed it as both knowing and understanding. The Greeks and most of the ancients valued this dual idea, and yet, according to the SEP, subtle changes in semantics took place over time, moving episteme from knowing and understanding to just knowing, which, in my opinion, allowed observation a more prominent role regarding understanding. The question is, did observation improve our understanding of understanding?

There are many theories on why this shift in the semantics of understanding occurred, but occur it did. My concerns do not center on the “why”; they center on the impact of this shift on present understanding. The idea of understanding went through a period in which its overall importance diminished and was replaced by the idea of theorizing, which is not understanding but speculation. According to the SEP, theorists throughout history have proposed various theories about understanding, and most did two things: they pulled us away from the original idea of understanding and pushed us towards a focus on self. It was self that was understanding’s biggest threat in the past, and it is self that remains its biggest threat today.

When I read that understanding was neglected in the past, I struggled to make sense of why. Who would not want to understand? It only made sense to me when I learned that, at the time, understanding was thought to be primarily subjective and psychological, with a focus on an understanding that was familiar. Familiarity is the idea of being closely acquainted with something or someone. Its impact pushed understanding towards self and away from the dual idea of knowledge and understanding. This push mutated understanding into something that equates to an opinion, making it foundationally subjective, that is, until it bumped into science. In the world of science, understanding, often discussed under the banner of epistemology, was forced to move away from subjectivity and towards objectivity in order to interact with positivism, which was foundationally dominant in science until recently.

According to the SEP, the notion of a subjective understanding inside epistemology was, rightfully, downplayed in the philosophy of science due, in part, to the efforts of Carl Hempel (Grimm, 2024). Hempel and others were suspicious of this “subjective sense” of understanding and its interaction with science. According to Hempel, “the goodness of an explanation” had, at best, a weak connection to understanding, especially real understanding. His point was that a good explanation might produce understanding, but then again it might not, and yet it would still feel familiar and seem like understanding. That is not objective, and objectivity is what science requires. The work of Henk de Regt drew a distinction between the feeling of understanding and real understanding. He argued that “the feeling is neither necessary nor sufficient for genuine understanding.” His point, which seems straightforward, was that real understanding has little to do with feeling. Feeling is neither scientific nor objective; it is always rooted in self, which is not understanding.

Understanding is thought to be a deep knowledge of how things work and an ability to communicate that knowledge to others. This presents a question: what is real understanding? According to the SEP, there are multiple positions on this one question. It is interesting to note the presence of “luck” in positions on understanding, with one position asserting understanding as akin to full-blown luck (the fully externally lucky position). This is where I differ from the SEP and dismiss the idea of luck altogether. These positions assert, in subtle ways, understanding as a pragmatic, product-oriented method; all that seems to matter is that you understand, which, by all indications, would not be true for true understanding. True understanding is being able to explain to others, in detail, the understanding you understand. The fully externally lucky position is rather pragmatic and contrary to this idea of understanding. It seems to stop at one’s own understanding and does not consider that to truly understand, one must be able to pass that understanding on to another.

The contrasting position argues that one needs to understand in the “right fashion” to understand again, and for me, the word “again” is key. In other words, understanding, to be considered understanding, needs to be replicable in a way that can be communicated to others so that they understand, and to do that one must understand the process every time, not just one time. The first position, for me, violates the duality of understanding and knowledge. This is important because, for me, it is the duality that completes understanding. To understand a concept, one must know what the concept is and understand how it works. The first position, the fully externally lucky position, blends knowledge and understanding into something that loses the semantics of both, pushing understanding into a pragmatic area where it becomes almost tangible, discounting the process in favor of the product. This is not understanding but a lower form of knowledge. True understanding is always a process that explains how the product came to be, how it works and how it is applied.

There are those who argue that understanding does tolerate “certain kinds of luck.” These philosophers hold that understanding can be “partly externally lucky.” Is it me, or does luck have no place in understanding? If luck has any place in understanding, then that understanding is not understanding but a stumbled-upon form of knowledge. No one stumbles onto a medical degree, nor onto the knowledge needed for it. Most would not consider this a proper application of their position, but understanding builds on itself, and if it does, then this application is not as much of a stretch as it would seem. I believe the idea of understanding goes beyond the discussion in this post. It is an esteemed element of our humanity. It is who we are as human beings, and a large part of what makes us human.

There are those, and their number grows daily, who no longer value understanding nor want to spend energy doing it. They consider it an antiquated process, no longer needed because we have technology; specifically, we have AI to do all our understanding for us, right? But do we? Does AI help us understand, or does it only provide explanations? Are explanations understanding, or are they something else? I believe understanding is distinctly human. I believe it is how we interact and build community. Maybe we don’t need to understand chemistry (I think there will always be a need to understand chemistry and everything else), but we will always need to understand each other, because we are all different.

If we no longer strive to understand the things we do not know, how will we ever understand anything or anyone? Will we even want to understand in the future if we no longer seek to understand in the present? Will we become conditioned to enjoy being isolated and introverted? That seems sad, and not human. This idea of understanding is much more complex than most realize. The issue is not just one of episteme but one of humanity, at least to me. Think long and hard about understanding, because once you lose it, recovering it will not be easy. Thanks for reading! Until next time …

Grimm, Stephen, “Understanding”, The Stanford Encyclopedia of Philosophy (Winter 2024 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/win2024/entries/understanding/>.

Zetetic Philosophy: The Pursuit of Understanding

I recently read an article about the pursuit of understanding as it relates to Zetetic Philosophy. The term “zetetic” is not one we often hear or use, and yet it is an important one. It is derived from the Greek word “zeteo,” which means “to search” or “to examine.” Zetetic Philosophy emphasizes the importance of questioning and investigation over relying on preconceived notions, facts and assumptions. This sounds familiar, but what many do not realize is that most philosophy today begins from a culturally accepted position, which is preconceived. The article suggested that we should view Socrates as a zetetic philosopher due, in part, to his detailed account in the Republic of the ideal form of formal education. This intrigued me, but education is not the reason I read the article; understanding is.

If you read the Republic (and I recommend that you do), you will encounter the philosopher-kings, Socrates’ ideal rulers. They are noble and intelligent, known by their virtues, and they think through a certain praxis, thus the moniker philosopher-king. Socrates referred to their thinking process as the dialectic and presented it as a positive form of dialogue that incorporated “arguments in order to achieve a sure and true understanding of reality (Being).” The dialectic was a form Socrates used to test how and why things are the way they are. For Socrates, the dialectic was a method for achieving knowledge of what he called the “Good-in-itself” by distinguishing “the good” from everything else. Many equate the dialectic with the Socratic Method, but they are not one and the same; they are two different methods.

The Socratic Method differed from the dialectic, in part, in its “method of questioning,” which expressed more ignorance than understanding, which seems odd and counterintuitive. Both proceeded through the antithesis to confirm what is true, but only the Socratic Method embraced uncertainty as a healthy part of the process. In the Socratic Method, the teacher must hold knowledge, that is, know something and be able to give an account of it, in order to impart knowledge or lead others in obtaining it. The teacher must master both the knowledge and the method of distributing it in order to move past the stage of personal ignorance and lead others to understanding. This is not a weakness of the Socratic Method but a strength. Read any of Plato’s dialogues and you will find that Socrates was this type of teacher.

The author suggested that Socrates, as a teacher, had the following characteristics: the desired results were met, he had the answers he sought from his students, his method unfolded in a “teleological” manner and his form of knowledge was different from the knowledge associated with the virtues he conceived. This stands in stark contrast to Socrates’ numerous claims of ignorance, but this idea of ignorance must be important, or else he would not keep using it. In the Republic, Socrates denied several times that he was in possession of a certain kind of knowledge. He stated several times that he knew nothing. What is happening here? Is ignorance an important part of knowing?

Several authors have pointed out that Socrates sought to be a co-participant in the learning process with his students, even abjuring the moniker of “teacher” as too formal, in order to achieve equal status with them. Was ignorance a means to this equal status? This is, in some sense, Socrates maintaining a posture of seeking and yearning for wisdom in the same manner as his students. The author implored us not to fall for the idea of Socrates as a radical nihilist skeptic but to look deeper into this idea of understanding as it relates to ignorance. Seeing Socrates as a zetetic philosopher is “antithetic to the philosophical ideal of the philosopher-kings of the Republic who were to lead their city-state towards that which is good and true,” or at least that was their goal.

These philosopher-kings are referred to as echonic (traditional) philosophers, and Socrates never claims to be their equal. This idea of echonic philosophy, which these kings are thought to possess, is found in Book VII of the Republic and represents authenticity and proper education, which together were supposed to give their possessor the ability to grasp what it takes to rule. Yet the author presents Socrates as a zetetic philosopher, one who embraces a philosophy that is ongoing, dynamic and critical in its analysis. It is a philosophy with no final answers, one that instead seeks to continue to inquire. Its understanding is not found in Plato’s forms but is grounded in humanity, with its limits and finitude. This is an important point regarding the pursuit of understanding: it is a process that is always fluid, ongoing and never-ending.

The author implies that we must learn from Socrates that real education is based on zetetic philosophy, which is, according to Plato, a “turning around of the soul” back to itself in an enlightened state. This suggests something more about education and about understanding, especially if we look at the three moments of the zetetic journey found in Plato’s Allegory of the Cave. First, there is liberation from the bonds; then there is the ascent upward to the light; and finally there is the return to the cave. These three moments come together to fully express enlightenment, or education and understanding. This idea of zetetic philosophy was thought to avoid expecting absolute, irrefutable instances of truth, as if they did not exist.

The implication is that we must first recognize our ignorance and our limitations as human beings. This is where the pursuit of understanding begins. It does not begin within the knowledge itself but within us, recognizing first our humanness and acknowledging second our limitations. Therefore, all pursuits of understanding, as hard as this may be to understand, seem to begin within us and not within the knowledge we seek to understand. Is this the message of Socrates? Does this make sense? I am not sure, but it does force me to do one thing … think, and that is always a good thing. Until next time …

The Rise and Fall of Western Civilization

Part I: The Development of the West

I recently read an article about the decline and fall of the West, which produced a single thought in response … are we living through what many are calling the decline of the West, or has the West already fallen? These two questions prompted me to do a little reading on the subject. In several of the articles I read, one book was referenced more than all others: A Study of History by Arnold Toynbee. It turns out that this is not just any book but, by most accounts, a masterpiece when it comes to Western Civilization. Let me explain why.

Arnold Toynbee suggested in his book that the West was already in sharp decline. Why did he do this? A Study of History is a multi-volume study of civilization in which Toynbee examined twenty-one different civilizations across the span of human existence and concluded that nineteen of them collapsed when they reached a moral state comparable to that of the present-day United States. Here was the shocking part for me: the first volumes of A Study of History appeared in 1934, and even then he posited that the West was in sharp decline and was, according to him, “rotting from within.” Toynbee died in 1975, but I wonder what he would think of our culture today. Are we living in a culture rotting from within, or is it already dead?

With this post, I begin a series on the West with the goal of answering the question: is the West in decline, or has it already fallen? There are several other excellent books devoted to this topic. Oswald Spengler wrote The Decline of the West, Christopher Dawson wrote Religion and the Rise of Western Culture and Tom Holland wrote Dominion, and each author grappled with the same question regarding the decline of Western culture. Is Western culture dead, or is it in decline? Let’s find out together. First, let’s explore how the West came to be.

I begin with Samuel Gregg and his book Reason, Faith, and the Struggle for Western Civilization, which is also excellent on our topic. In it, he offers his account of the West, which is like the others but nuanced with some differences. Gregg argues that Western Civilization was conceived in a marriage of Jerusalem and Athens. His answer is like many others, and yet he posits that Western Civilization was born through a marriage of “faith and philosophy” in a version of Christianity, born in the West, that embraced and applied faith and reason as one. He sees this “one” coming out of ancient Judaism, which he suggests was a synthesis of faith and reason applied to the living of life in a new way. Life was no longer about survival, at least not in the West; there were advancements that made life better and allowed progress in thought and religion. Gregg states that Judaism “de-divinized nature” and was the first worldview/religion to completely reject the ancient idea that kings and rulers were divine and everyone else was to be under them. Judaism, unlike all the other religions around it, offered the world a new king. Its rejection of the old idea came through a new, spiritually oriented view of the cosmos. Judaism saw the cosmos as part of the created order of a universe made by a Holy God, and because the universe was created by this Holy God, it had order and intelligence and was not the formless chaos all others saw it to be.

According to Judaism, there was good in time and space, there was hope, and all was not lost, which was a much different narrative of the world than most other historical and religious narratives of the time. What Gregg was proposing was that in Judaism the Jews found a liberation of sorts, a liberation of the cognitive from time and space. Judaism affirmed that there was a good God in heaven, a Holy Creator God, and that human beings were part of his created order, not merely interchangeable parts of a larger machine. Human beings were seen as created in the image of this Creator God; they had purpose and were given responsibilities to live as moral beings within the created order. This was a radically different idea from all others before it, and it is the first thing that makes Western Civilization unique. This was a vastly different worldview, one that would be distinctly Western and a foundational mark of Western Civilization.

The merging of Athens and Jerusalem cannot be overestimated in its impact on Western Civilization and the Western mind, especially regarding our current modern Western mindset in the United States. It is the United States that has been the pseudo-capital of Western Civilization for many years now, and it is the United States that has served as the poster child of the West. The United States has impacted the West, including the Western mindset, more than most. And now it is this mindset that has become compromised, as referenced in part by Allan Bloom in his book The Closing of the American Mind. It is the American mindset, once so free and so creative, that now seems more vulnerable to, and more impacted by, the attacks against it than all others. Bloom, in his book, attacked the moral relativism that he claimed had taken control of the colleges and universities. The very freedom brought to us by the West was the very thing being transformed before our eyes. Bloom published his book in 1987, but he appeared to be saying some of the same things as Toynbee: the West, often seen through its colleges and universities, was already in decline and dying back in 1987.

Back to Gregg: he notes that Athens brought both contributions and obstacles to human thinking. Athens was known for its skepticism, its irrationalities and its philosophies, and most of them stood in stark contrast to the distinct and different worldview of Jerusalem (Judaism). So how did they merge when all indications are that they should have clashed? The merging of Judaism and Greek thought, according to Gregg, predates Christianity, which is marked by the birth, death and resurrection of Jesus Christ. There can be no denying the impact of Jesus Christ on the world, regardless of your beliefs about him. Prior to Jesus Christ, educated Jews were more than familiar with Greek thought and moved easily back and forth between Hellenistic and Jewish thinking. This was for purely pragmatic reasons, as the Romans controlled the world and therefore controlled thinking. The Romans were borrowers and refiners. They invented little of their own, but they borrowed from those they conquered and improved what they borrowed. The Romans allowed those they conquered to keep certain elements of their own culture if they accepted the elements of Roman culture considered important. It was the Jews who were different from all other cultures; it was the Jews who had this One God and who refused to bow down to any other god. Both the Romans and the Greeks viewed the Jews as barbarians. Why? Ironically, it had little to do with their religion and more to do with their thinking and their disposition. The simple answer is that they were not Roman or Greek; the better answer would be to say that they were not Western prior to Christianity. So, there it is … a connection between Christianity and Western Civilization. In my next post, I will explore this connection, but until then …

Critical Theory: Part V

The Deconstruction and Development of a Theory That Is Critical  

I have now arrived at the point where I will pull back the layers of development regarding Critical Theory. Theory, before Critical Theory, had as part of its composite elements that were oriented towards analysis, with a distinct dialectic tendency. With this dialectic tendency, theory was considered a well-substantiated explanation of an aspect of the natural world. It required fluidity with an analytical orientation, which allowed theorists, especially those in science, to make predictions based upon its being testable under controlled experimental conditions. In the case of philosophy, theories were evaluated through the principles of abductive reasoning and pushed to withstand scrutiny; they were used to test a thesis through the development of an antithesis, which was thought to confirm whether the original thesis was true or false. This element of theory was, for Horkheimer, problematic, as the dialectic was not only rooted in science but also a beneficiary of positivist protection. It was perceived as a process that revealed true scientific tendencies through objective means (positivism). Horkheimer recognized that theory, left untouched, had no roots or tendencies towards Marxism, nor would it ever have them unless its foundational structure changed.

Horkheimer began the deconstruction of general theory by making a connection between theory and society, which pulled a distinctly social element into the perception of theory. He established this connection through what he called the savant, or the specialist. This was an important step in the deconstruction of theory. He wrote of the specialist, “Particular traits in the theoretical activity of the specialist are here elevated to the rank of universal categories of instances of the world-mind, the eternal Logos.” His point was to establish that it was the individual (a member of society) who was the universal when it came to the theoretical (theory), because, according to Horkheimer, the universal was not theoretical or dialectic; it was, instead, individual genius. This push towards individual genius also established theory as historical. He explained that the decisive elements of theory were nothing more than activities of society, “reduced” to the theoretical through the activities of individuals in society; in the case of theory, the activities of a specialist, or of individual genius, were activities of society. This tied theory to the social through the individual (whether as specialist or as individual genius), and it was the individual who rooted theory in the historical through the time and space the individual occupied.

In his union of theory and the individual, Horkheimer created a bridge from the theoretical to the social via the individual, through individual genius, but it was the specialist whose activity he labeled “the power of creative origination.” This activity, to a Marxist, was production, which Horkheimer labeled “creative sovereignty of thought,” reinforcing that even the individual’s thoughts were social and historical. This effectively removed scientific theory from its privileged and protected positivist position (objective truth) and reduced it to a social action. This was a line in the sand for Horkheimer … a risk he was willing to take. The risk, attacking the legitimacy of all other theories grounded in the scientific through his new “critical theory,” was well worth it for him. Coming out of World War II and the oppressive reign of the Nazi war machine, he believed people were open to the radical change he proposed, especially if it “appeared” to bring back the civil liberties and freedoms they had lost.

For Critical Theory to live beyond its inception, the idea of theory (the theoretical) would need to be recast with a different perception and a different semantic interpretation, one that embraced Critical Theory without requiring Critical Theory to embrace the old ideas of theory, change to fit them or be compromised by them as applied by science. The new theory Horkheimer proposed had to exist as dominant while bringing change to the theories it encountered, pushing them towards tendencies that were critical and Marxist, and the only way to do this was for Critical Theory to be authoritative. When encountering all other theories, it had to be “the” critical theory in each interaction. From my perspective, I do not believe this could have happened at any other point in history; after World War II and the Nazi regime’s widespread oppression, Horkheimer saw an opportunity and took it.

To usher in this change, Horkheimer pushed the theoretical to the point of instability, which produced doubt, setting theory up to be re-established as “the” critical theory that would remove the doubt now present. Theory, for Horkheimer, was now where it needed to be; it was no longer theoretical in any protected sense but instead a true means of production. Its perception was now more a social function or an individual decision than anything theoretical, which made it part of production, which he labeled a “production of unity,” reducing production to a product. Horkheimer never saw production as something that produces a product; he saw production only as a product of culture, manifesting in the same ways as other cultural products. For him, it had to be a means of production that could be controlled by Marxism. If production was no longer a process of “becoming,” then it would be open to “becoming” something new, something with Marxist tendencies, especially if it was firmly entrenched in the social and the historical. As a product that was social and historical, it would now be oriented towards individual tendencies (the savant or the specialist), opening it up to cultural changes and semantic shifts with distinctly Marxist orientations.

As a product, the process of production was now categorical, easily manipulated and positioned to be re-formed in a different light. Horkheimer’s attack came full circle when he wrote, “In reality, the scientific calling is only one, non-independent element in the work of historical activity of man, but in such a philosophy the former replaces the latter.” Linking theory to history allowed it to be supple in much the same way history was, which positioned theory to be pliable … more open to revisions, changes and the influence of propaganda, and open to being shaped by the orientations of the scholars and theorists addressing it, in much the same way history was addressed. The pliability now attached to scientific theory was no longer fluid in any natural sense but mechanical in every aspect of its movement. Its movements were intentional, which allowed it to be manipulated through the power functions of those overseeing it. It would become dependent on individuals and their interpretations, orientations and contexts, and it would no longer be dialectic. This created space for Critical Theory to move in and take over the theoretical through the individual.

As I read Horkheimer’s essay, his attack on the theoretical was on full display; he saw all dominant theories and philosophies, as well as those objects we perceive as natural—cities, towns, fields, and woods—as bearing the marks of man and shaped by man’s oppression. They were products of society to him, the means of production and, in a perfect Marxist world, equally distributed to all and not left to the bourgeois to manage and control. He was clearly now viewing theory, through a distinctly Marxist lens, as social and historical. Theory was part of society and tainted in all the same ways; Horkheimer wrote regarding society, “The existence of society has either been founded directly on oppression or been the blind outcome of conflicting forces, but in any event not the result of conscious spontaneity on the part of free individuals.” This one statement about society was also to be applied to theory, prior to his deconstruction of it. He saw society as that which was built intentionally and with ill intent. He wrote regarding this thought, “As man reflectively records reality, he separates and rejoins pieces of it, and concentrates on some particulars while failing to notice others.” Those concepts of recording, separating and rejoining are conscious, intentional actions impacted by the beliefs and values of those individuals determining the recorded, separated and rejoined. Horkheimer bemoaned the intentionality of society and saw its structure as intentionally created to give the bourgeois everything at the expense of everyone else, and yet he used it to deconstruct theory and recreate it as Critical Theory.

What Horkheimer initiated so many years ago has come to fruition. Horkheimer has essentially replaced theory with a “critical” theory that is analytically and distinctly Marxist. He took the theoretical and its dialectical orientation and replaced its praxis with a Marxist one. The authoritative nature of theory, which has been assumed, especially in science, to possess an objective ability to confirm what is true, has now been taken over by a Marxist orientation with intentions oriented towards Marxist truisms. It is Marxist tendencies that are now dominant inside theory. They have been reconfigured to analyze other non-Marxist theories in critical ways … to cast doubt on them until they are overrun by this newly configured “critical” theory. In the end, they either submit to it or die. This is Critical Theory; it was created to be “the” critical theory of all theories and to leave Marxism in a dominant position in science and ultimately in society. This is where we find our world today … right where Horkheimer and his colleagues had hoped it would be. It is Critical Theory that drives the ideas in our colleges, pushes the bills in our government and changes the norms in our culture, and every idea, bill and norm has tendencies that are critical and distinctly Marxist. As we look at our culture and ask how we got here, there is but one answer … Critical Theory! Stay tuned for the next installment of this series. Until then, remember, thinking does matter!

Critical Theory: Part IV

The Creation of “Critical” Theory

In my last post, I suggested that Horkheimer, to create his new “theory,” had to re-create the general idea of theory itself. I posited my own assertions, which, if I am honest, are based strictly on my reading of his essay and my own convictions formed from that reading, which is my attempt to stay inside the spirit of Critical Theory. I tried to limit my secondary sources and keep his essay front and center with little to no outside interference. Right or wrong, I am left with my own assertions; whether they are corroborated by others or even valid seems to me of little consequence considering what I have read so far in his essay.

In a world of Critical Theory, one of the first impressions that came to me was this one: there are no rules. I need only to assert my ideas in persuasive, sincere ways and that should be enough, but that is the problem. It should never be enough; it should require more because, like it or not, they are my perceptions and those will always be based on me. It is the same with Marxists, Idealists and Pragmatists; beliefs and values always turn into perceptions. The important point is not to assert your perceptions as true but to determine whether your perceptions are true. Let’s jump right into this post with my own assertion.

My initial assertion regarding Horkheimer’s work on theory begins with this thought: to make it “critical” would require it be fully “critical” from all angles and for all situations. The only way I see this being accomplished is if Critical Theory becomes the dominant theory over all other theories. Regardless of the validity of my claim, to answer that question I must examine, in detail, his attack on theory, or the theoretical (I reference it at times this way to distinguish it from the categorical or the practical), because whether right or wrong, it reads as an attack to me. My perception is impacted by his statements; for example, he wrote, “The traditional idea of theory is based on scientific activity as carried on within the division of labor at a particular stage in the latter’s development.” Here Horkheimer seemed to direct his attack against theory by linking traditional theory directly to the action of the individual engaged in the theory, and in doing so, he presented the idea that the scientific action of a theory was no different than any “other activities of a society.” What I believe he was positing was that the individual action of employing a theory was, in substance and essence, no different than the individual action of a teacher, a coach or any other person acting according to their own convictions as a member of society; they are all social actions, which, in a subtle way, places science in a position to be overrun by Marxism.

I do not believe his point here was to destroy theory but to keep it in a state of flux to be used for his purposes. I believe he meant to link current theory to the individual for the purpose of giving the individual power over the theoretical process, which ultimately was a connection to the means of production via the individual. Inside Marxism, the individual and their actions would always be considered a means of production. Regarding this, he wrote, “… the real social function of science is not made manifest; it speaks not of what theory means in human life, but only what it means in the isolated sphere in which for historical reasons it comes into existence.” In this one statement, he reinforced his reduction of science to that of a social function and presented it with Marxist tendencies: as a means of production. With no pressure to defend his assertion, he proceeded to “deconstruct” out the “positivist protection” that science enjoyed, which would always present it as true and never as a means of production.

The result of this deconstruction was a reduction of the theoretical to a position where it would meet all the requirements to be a means of production, but he also did something else that was equally important. He planted into the discussion a subtle historical reference (“historical reasons”). This historical reference completes this reduction of science, from that of a theoretical process with positivist protection (my phrase) to one that was now an action resulting from an individual choice in time and space. This is important because, as a historical reference, the reduction of science was complete. He had moved it out of the theoretical realm and placed it firmly into the social realm, where it would be at the mercy of Marxism as a means of production.

Horkheimer next took aim at society, which he viewed from a distinctly Marxist perspective. He defined society as “the result of all the work” of all sectors of production in culture. His negative view of capitalism stemmed from his belief that it was the bourgeois in a capitalist state who would be the ones benefitting from the labor of everyone in society, which, for him, was categorically unfair and oppressive. This categorical oppressiveness, for Horkheimer, even extended to all ideas and thoughts of a society. For Horkheimer, the necessity of re-establishing the conception of theory in a Marxist tradition was priority one in his development of a new “critical” theory fully capable of competing against those theories already established and dominant. For it to have a chance, it needed a cultural foundation that would welcome it and allow it to grow, and to have that foundation, he would have to destroy the ideas of capitalism, which operated on concepts like supply and demand and Smith’s invisible hand … concepts almost impossible to control from a Marxist position. He would borrow the radical doubt of Descartes to accomplish this task.

Horkheimer understood that every dominant theory, once doubted, would be less dominant and more vulnerable. For Critical Theory to take hold, it had to be the critical lens used in analysis of all other theories, and part of that critical analysis had to be doubt, which, once used in analysis by Critical Theory, was left attached to the theory analyzed. Horkheimer’s goal was that one day Critical Theory would be the dominant worldview, but the current state of science, with its theoretical roots in qualitative and quantitative methodology, would destroy Critical Theory if it were not first destroyed. For Horkheimer, this was the motivation for his attack on the concept of theory. It was also why he used the radical doubt of Descartes in his critical analysis of theory to change it and then recreate it in the image of Critical Theory.

In his essay, Horkheimer dictated how theory was to be recreated semantically and culturally to reflect Marxist beliefs, and then he labeled this theory as “critical” and used it for critical analysis over all other theories. Horkheimer hoped to accomplish two tasks with his recreation of theory: first, he would eliminate the original idea of theory, which was a threat to Critical Theory, and recreate it in the image of Marxism. Second, he would use this recreated and reconstructed theory as a vehicle to deliver Critical Theory in ways that would assert it as a worldview and as dominant. As we look out at our world, what do we see today? We see those dominant theories of the past willfully submitting to the whims and desires of Critical Theory.

It is Critical Theory that has become the lens of critical analysis, leading the charge to cancel dominant theories of the past and opening the cultural door for new theories to come rushing in, and these theories connect with no other theories. They make no sense when it comes to science or even medicine, and yet they take hold, are defended and profoundly impact culture. We need only to look back and ask a few questions. When it comes to Critical Theory, where is dialectic thought? What about the antithesis? We see those theories of the past cast into the darkness of doubt by the shadow of Critical Theory, and they either align with or, in some cases, are replaced by Critical Theory or they fade away and die.

This is our world today. It is a world where Critical Theory has become more dominant than we even realize, and that is by design. My next post will explore how theory became “critical” theory, with the hope of educating all of us in ways that will help us identify Critical Theory and its impact. Until then …  

Critical Theory: Part II

Making Sense of the Chaos

In the next several posts, we will look at Critical Theory through Max Horkheimer’s 1937 essay “Traditional and Critical Theory,” in which he first defines Critical Theory by contrasting it with other traditional theories. It was Horkheimer who presented the agenda for the Frankfurt School in his 1931 lecture given upon receiving the directorship of the Institute of Social Research. In that lecture, Horkheimer proposed merging philosophy and social theory with psychology, political economy and cultural analysis in ways that developed a social philosophy capable of truly interpreting social reality, which was, and still is, an important concept of Marxism, but there was more. He also implied that this process of interpretation of social reality would create opportunities to change social reality. Why was that important?

Social reality through a Marxist lens is always perceived to be biased (unless it is Marxist) and rooted in a class struggle between the working class and the bourgeoisie (or capitalists). Marx saw capitalism like he saw all societies of the past, rooted in slave labor. To him, they were the same; their class struggles were similar, with one smaller group exploiting everyone else for their own benefit, which is why he advocated, as a true Marxist, for the common ownership of the means of production. He believed common ownership would eliminate the profit motive, which he saw as one of the main causes of class struggle. He advocated replacing it with a motive for human flourishing, but time has not been kind to Marxism. Common ownership of the means of production has been used, not to produce human flourishing, but, instead, to oppress and crush human flourishing through the vehicle of communism. It is in communism where we find much of the tyranny and the oppression in our world today, and it is also in communism where we find more suffering than flourishing. Critical Theory is not Marxism, at least it is not supposed to be. How is it different?

Horkheimer used the social sciences as the standard for Critical Theory because he found that the social sciences modeled themselves after the natural sciences, which root themselves in empirical social research. This was important for him, but there was also the issue of a distinctly positivist orientation, which he had to address (positivism is the idea that every assertion can be scientifically or mathematically verified, which justifies the rejection of metaphysics and theism). This positivist view of the world is still how we tend to see knowledge production today; it is true if proven by quantitative research (that which roots itself in the mathematical and the scientific), which makes most of us positivist-biased, trusting only science and math as the means to truth. Truth is found by both objective and subjective means. Critical Theory not only does not embrace positivist theory, but it is slowly trying to replace it with itself, which was Horkheimer’s solution to this issue.

When we examine Critical Theory, we find that it reflects only on its own origins, which are subjective and murky at best. This is one reason why Critical Theory moves away from a positivist view of the world … Critical Theory is primarily subjective; it depends on intentionality and seeks to leave space for newly created theories and philosophies with a distinctly Critical Theory orientation. It also embraces “an interdisciplinary methodology,” as I have highlighted, that seeks to bridge the gap between research that is empirical and research that is rooted in what one author called “the philosophical thinking needed in the correlation of history and empiricism.” That bridge between that which is static in the past (history) and that which is experienced in the present (empirical) is one that must be traveled. For a Marxist, it is an easy journey across, but for all others, it is one full of potholes and difficulties, which is seen as opportunity by the Marxist. One theorist put it this way, “Critical theory aims not merely to describe social reality, but to generate insights into the forces of domination operating within society in a way that can inform practical action and stimulate change.” Again, we cannot forget the Marxist view of social reality and the struggle they see in it. Critical Theory seeks to determine the best ways to undermine current social reality—because they see it as oppression—to prepare it for its own Marxist reality.

One of the first and most fundamental goals of Critical Theory is to unite theory and practice, not to discover that which is true, but to form a “dynamic unity with the oppressed class.” This unity of theory and practice in Critical Theory is not a unity as much as it is a takeover. Both theory and practice must submit to the subjectivity of Critical Theory in ways that transform, allowing for the formation of a dynamic unity with the oppressed class. When this transformation takes place, research, whether quantitative or qualitative, takes a hit and becomes something it was not supposed to be … tainted with intentionality and the subjectivity of Critical Theory. For Critical Theory to be itself, it required an intentionality, rooted in its subjectivity, forcing both theory and practice to surrender to the subjectivity of Critical Theory. It is the subjectivity of Critical Theory that is king in all its encounters, and this is by design.

Horkheimer, in the opening paragraph of his essay, stated, “The real validity of the theory depends on the derived propositions being consonant with the actual facts. If experience and theory contradict each other, one of the two must be reexamined.” This reexamination was the change Critical Theory brought to theory, but it could only happen if theory had already surrendered its past. In the past, if experience and theory contradicted each other, the hypothesis was incorrect. For example, if my theory is that the sun will not come up tomorrow and I wake up, run to the window and see the sun, then my hypothesis is not correct, forcing me to change my hypothesis. In Critical Theory, it is the hypothesis that has taken the place of practice and theory. It is no longer the hypothesis that determines the truth of a theory; Critical Theory is the truth and all other factors, including a hypothesis, are to be its subjects.

If experience and theory contradict each other something is wrong, but what? A theory that needs adjustment to align with experience is not really a theory but merely an assembly line of options that can be adjusted to produce the desired product. The idea of experience and theory in Critical Theory is not theoretical in any sense of the word; it is instead pragmatic, practical and interchangeable. Adjustments are sought to determine how to align experience with theory to create a social philosophy capable of changing theory first, but ultimately changing society. Horkheimer’s response to the contradiction of theory and experience was that either the scientist failed to observe correctly, or the principles of the theory were wrong, but we have to ask this question: are we talking about allowing the research to speak or using the research to speak for us? 

Horkheimer quotes Husserl’s definition of a theory: “an enclosed system of propositions for science as a whole.” The basic requirement of any theoretical system ultimately is harmony, but that harmony comes at the expense of the friction before it, and yet, there are indications that Critical Theory sought to keep and maybe even use the friction of other systems to destabilize them so that Critical Theory could be the harmony that these systems sought. Horkheimer references that the basic requirement of a theoretical system is “that all parts should intermesh thoroughly and without friction,” but then he seems to lament that this traditional conception expresses a tendency towards “a purely mathematical system of symbols.” He goes on to reference that in large areas of natural science, theory formation has become a matter of mathematical construction. Horkheimer is implying that traditional theory is stuck in the rut of describing social institutions and situations as they are. Their analyses do not incorporate the Marxist view of social reality rooted in class struggle and oppression, and therefore they have little to no effect on repression and class struggle. Horkheimer sought to build Critical Theory as the opposite of traditional theories, as that with a direct effect on social reality to answer the repression and struggle he saw in society.

As I close this post, let me encourage you to come back for Part III as I attempt to add more clarity to the confusing world of Critical Theory. Until then … 

Critical Theory: Part I

Clarity for the Obscure

This post begins a series on Critical Theory as I attempt to bring a little clarity to that which is obscure, or at least seems obscure. It is always difficult to bring clarity to something that seeks to remain obscure (please note this reference). Is this the nature of Critical Theory or does it just appear this way to those of us unfamiliar with it? The conjectural nature of Critical Theory does position it to be distorted but is that distortion just part of its fabric or is it intentional? Good questions that demand answers, which is the purpose of this series. It will be a bit like nailing Jello to the wall … you will soon see what I mean. 

Let’s begin with the Stanford Encyclopedia of Philosophy, which describes Critical Theory as a phrase that “does not refer to one theory but, instead, to a family of theories” which are designed to critique society through the assimilation of chosen normative perceptions via the empirical analysis of current societal norms. I know what you are thinking … what does all of that mean? Hidden behind this loquacious description is an agenda that is intent on many things, but do not miss that changing the world is one of those intents.

Let’s begin by dissecting this murky explanation of Critical Theory provided to us. What it says to us is that Critical Theory was intentionally created to be integrated in manners that disrupt the dominant norms of society through an intentionally created analysis, one that deconstructs dominant norms into fragments that can then generate a praxis of sorts, which can be applied to current culture to produce norms with Marxist tendencies. Whew! I am not sure that I provided much clarity, but in short, the idea is to provide Marxism an opportunity to become a worldview that can be applied in all situations through ways in which it can become the dominant worldview. Again, the goal is to gain a dominant foothold in mainstream society. All references to Critical Theory (and it is always capitalized as a proper noun) are references to the work of several generations of philosophers and theorists, all with foundations in the Marxist tradition. It is truly not just one theory but many theories working together for one common goal. Clear as mud, right? Let me provide a little historical context with the hope that it adds some lucidity.

The whole idea started with the son of Herman Weil, an exporter of grain who made a fortune shipping grain from Argentina to Europe. Felix Weil inherited his father’s fortune, but instead of using it to broaden the family business, he used it to found an institute devoted to the study of German society through a distinctly Marxist approach. Not long after its inception, the Institute of Social Research, as it was to be known, was formed and formally recognized by the Ministry of Education as part of the Goethe University Frankfurt. The first appointed director was Carl Grunberg (1923-29), a Marxist professor from the University of Vienna. The institute was known for its work combining philosophy and social science, two distinct and separate fields of study at the time, in ways that were informed by Marxism. As for the term itself, Max Horkheimer first defined it in his essay, “Traditional and Critical Theory,” in 1937. I will be referencing and quoting from this essay in this series.

Today, Critical Theory is composed of many different strands of emerging forms of engagement in all areas of culture, all coming together to destabilize current dominant norms into positions of weakness. In these positions of weakness, the intent is to introduce forms of Critical Theory that eventually erode the dominant ideas and replace them with ideas rooted in and composed of Marxism. The entire process was an attempt to normalize Marxism and package it in a way that allowed it to be transformed into the norms of society. This became known as the “Frankfurt School” of critical theory, and as we will find out, they were very successful.

This school is not really a “school” in any sense of the word but a loosely held (critical) tradition or belief system that is bonded by critiques on how to best define and develop the (critical) tradition in ways that will push it into mainstream society. Marxism’s largest deficit was thought to be its absence in mainstream society; it was thought that if it could just be applied and lived out by more people, it would be embraced and change culture. The movement was meant to correct this perceived deficit through a more expansive means that would extend its roots deep into culture and provide more people the means to embrace it. The initial efforts of the (critical) tradition attempted to combine philosophy and social science into an applicable theory that would serve as a door into mainstream culture; it was created with “liberating intent” (with a goal of freeing society from the current dominant norms), but here is an important part of the application of this theory. These philosophers were patient; they understood that what they wanted to accomplish would take time. It would actually take generations of philosophers pursuing the same theories in the same manners to claim any ground in mainstream society. The first generation of these philosophers were, what has been called, “methodologically innovative” in their approach to developing this (critical) tradition. Marxism was their vehicle of change; it was also their product, which they hoped would become a dominant part of society. They integrated it with the work of Sigmund Freud, Max Weber and Friedrich Nietzsche, each of whom had made their own inroads in society, using their work in secondary ways to develop a model of critique anchored in what is known today as Critical Theory.

Some of the prominent first-generation philosophers were Max Horkheimer, Theodor W. Adorno, Herbert Marcuse, Walter Benjamin and Jurgen Habermas, who is still an important figure of second-generation philosophers in Critical Theory. In what is sometimes known as the third sense of Critical Theory, the work of Michel Foucault and Jacques Derrida was referenced and used to advance the tradition due to their associations with psychoanalysis and post-structuralism, with particular interest in Derrida’s theories of deconstruction. Once a workable tradition (theory) was created, it was used as a means of analysis of a wide range of phenomena—from authoritarianism to capitalism to democracy. Each analysis drew Critical Theory closer to the pillars of society—the family, the church and the school—and to the replacement of a moral paideia with one with Marxist foundations. Today, we see evidence of its presence in a wide range of cultural norms, including in how we live, think and act. Its influence is wide and deep and extends into many areas of current culture in such a complete way that there are elements of Critical Theory in our lives that we don’t even consider Critical Theory.

As I close this post, my goal was to give you a macro-picture of Critical Theory. I hope you are now a little closer to understanding it than you were before you read this. In my ensuing posts, I will begin to unpack the tradition so that we not only understand it, but we can also identify it and the areas of our own lives it is impacting. This is why thinking matters to all of us.   

Existentialism: Part VI

Part VI: Living an Existential Life

With existentialism being so abstract, how does one live inside its philosophy? This is the last question I will tackle, in this final post of the series.

I begin with a quote from Le Monde, a Parisian newspaper that attempted to define existentialism in 1945. In its December edition, it admitted that “Existentialism, like faith, cannot be explained; it can only be lived.”

A few posts back, I referenced that it is indeed more a faith than a philosophy. Why is that? One of the main reasons is that it bases conduct on a belief in individual freedom more than anything else. One is free to choose one’s own conduct, but here is the difficult part: inside that freedom there is a belief that no objective moral order exists independent of the human being. It is up to each human being to create his or her own moral order by way of living it and affirming it through their own authenticity as they live. I don’t know about you, but that seems a bit daunting.

Existentialism, you could say, is obsessed with individual authenticity—how individuals choose to live their lives. It rests on some bold ontological speculations about what does and does not exist. One of the weightiest speculations is the belief that there is no god or entity outside of the human being; therefore, moral values do not exist outside of the human being. There are no moral absolutes, nor are there universal laws or codes of ethics that apply to all of us. Values come to us as we live our lives in authentic ways. If we live our lives as if values were given to us by God or existed outside of our being, that would amount to existential sin: it would equate to living a life refusing to face the freedom you have been given to live your own authentic life, but from where does that thought come? Is it even a valid thought if it comes to us from others? You can see the dilemma we face. This individual authenticity is very important to the existentialist.

Inside an existential world, every individual is responsible for deciding, on their own, how to evaluate their choices, and it is only through those individual choices, given to them by the individual freedom they have, that values come, but do they? An existentialist believes that it is the action rather than the principle that creates value, but is the action not principled action, especially if it applies only to the individual? To value one action as more important than any other action is to prioritize it—to set it apart as an ideal, which is a value, is it not? That ideal is what we strive to achieve as we live our lives. In existentialism, it is authenticity; in the Christian faith, it is the glory of God. Is there a difference? When we choose to act in a certain way, we are choosing what we think is right as it applies to us. Inside existentialism, we are to live for ourselves; inside Christianity, we are to live for others. The only difference is the direction; in existentialism, all actions are directed inward to self, but inside Christianity, all actions should be directed outward to others.

Existentialism, as we have referenced, does not believe human beings have a pre-existing nature or character, but in many ways, it instills this belief as an existing nature. We are “existentially” free to become “self-created beings” by virtue of our actions and our choices, but is that not an existing nature that must take hold of us for us to live as existentially-free individuals? We are told that we possess absolute freedom … that we are free to choose, and this truth is so self-evident to us, or it should be, that it never needs to be proven or argued. Again, is that not a pre-existing nature, or maybe the better word is condition?

There is acknowledgement that no one chooses who they want to be completely. Even Sartre recognized this, and he also recognized that each person has a set of natural and social properties that influence who we become, which we might refer to as social conditioning. He gave them a name: “facticity.” Here is where, in my opinion, existentialism gets a little upside down. Sartre thought that one’s facticity contained properties that others could discover about us but that we would not see or acknowledge ourselves. Some examples of these are gender, weight, height, race, class and nationality. There are others, but it was thought that we, as individuals, would hardly ever spend time examining these ourselves, and yet, today, many spend all their time lamenting them or agonizing over them. An existentialist would describe these as an objective account not capable of describing the subjective experience of what it means to be our own unique individual. As we look out at our world, what we see is the breakdown of not only society but of existential philosophy.

Existentialism came of age between 1940 and 1945, during and after WWII, a unique time, especially when considering the views of freedom and choice in Europe. Europe at this time was, in my opinion, the perfect storm for existentialism to bloom and grow. Its focus on individual freedom was deeply appealing to those coming out of a war-torn Europe who had lost all freedoms for many years. The appeal was every bit as emotional as it was intellectual. Sartre was quoted as saying, “If man is nothing but that which he makes himself then no one is bound by fate, or by forces outside their control.” He was pushing the idea that only by exercising personal freedom could people regain the civil liberties they had lost, which was taking advantage of the situation and the state of those coming out of the war having lost everything.

There is a problem with, and a price to be paid for, the freedom to do whatever you want whenever you want, which existentialism advocated, and that price is steep. In such a culture, everyone gets that same freedom, even those who oppose your right to freedom. Coming out of a war that took everyone’s freedom, individual freedom was embraced and even needed to repair and restore, but with it came a burden that we are only now realizing. There is really no such thing as individual freedom unless you live alone on a remote island. Any type of freedom, especially one advocating that every choice is ours and ours alone, will eventually affect others. There is just no way around this.

In the situation coming out of a long war, the burden was light, as our individual choices were directed at restoring the individual freedoms we had lost, but eventually those choices would move beyond our own freedoms and seek other things. Our desires would extend beyond what we had and seek what we were owed and what we deserved. It is in those times that this light individual burden became heavy and hard. Sartre recognized these times and presented an explanation. He said it is in these hard times that we adopt a cover of sorts to escape the pressure of choices that extend beyond us, choices he called “bad faith.” He said that we resort to “bad faith” when the pressure of choice is so overwhelming that one pretends there was no freedom after all. Sartre would say this is a special kind of self-deception, a betrayal of who one really is, but there is also evidence that this “bad faith” was a personal betrayal of existentialism. It was a desire for more … more freedom … more liberties … more rights. Sartre would claim that this “bad faith” was merely a denial of the freedom afforded to us, but who will deny freedom? He claimed that one common form of denying one’s freedom is to present excuses for one’s behavior, but is not an excuse presented as a means of justifying a wrong action while knowing the right one? Again, this is another sizable hole in existentialism.

As I close this series, let me summarize the main tenets of existentialism and present a few questions to consider in response to each. 

First, true existentialists believe individuals should embrace their own freedom: everyone has the freedom to make their own choices, and those choices will and should define who we are. The problem with individual freedom, as I have referenced, is that it often comes at the expense of someone else’s freedom, unless, again, one lives as a hermit or in paralysis. The other issue is this one: there is no such thing as individual freedom. Everyone lives in some sort of community where our choices infringe upon others, which makes most of our choices not individual.

Second, true existentialists acknowledge the absurdity of life. They believe that life is absurd and devoid of inherent meaning, which, for them, prompts individuals to create their own meaning and values through their own choices. But is this absurdity pre-existing, either in culture or as a thought? It is presented as ever-present, which is pre-existing, unless it arises from individuals living freely in a world where everyone is living their own different life, which would indeed make absurdity a reality. My question is this: does individual freedom contribute to the absurdity or create it?

Third, true existentialists believe in accepting responsibility for one’s own actions. They believe, and rightly so, that with freedom comes responsibility, and that one should own one’s decisions and the consequences that result from them. They believe doing this will empower one to live authentically and with integrity, and I am in full support of living with both, but the question is this: will living an existential life produce both? What we have seen is that living authentically does not necessarily lead one to live with integrity, which suggests something else is involved in life. In most cases, integrity never reveals itself in isolation, as there is no opportunity to put it into practice. Most of the time we put integrity into practice in our interactions with others, when we place them as more important than ourselves. How can we do that if living our best existential life means living an authentic, individually free life?

And, finally, true existentialists believe in living authentically at all costs. They strive to be true to themselves and to avoid conforming to cultural or societal expectations and norms. The key to authenticity, for an existentialist, is to understand one’s desires and values and live in accordance with them to the best of one’s ability. This is existentialism, but is it, really? As I have pointed out, there are some real issues of consistency and causation that must be addressed to make sense of this world in which we live, whether we are existentialists, Christians, atheists, agnostics or aliens.

As I close, the term existentialism tends to scare most people when they hear it, but the reality is that it is just another philosophy trying to make sense of the world in much the same way we are. At the end of the day, I think we all want the same thing … for the world to make a little more sense to us than it did yesterday. I hope this has been a fruitful experience for those who have joined me on this journey. I hope it has pushed you to think a little deeper and to spend a little more time considering different thoughts. I hope you don’t see difference as a threat, but as that friend who sees the world differently than you do. You may not agree with him, but he makes you better because he pushes you to think about things you would never even stop to consider without his prodding. Difference is not something to be afraid of if you can think. This is why thinking matters … always! Blessings!

Existentialism: Part V

Part V: The Manifestation of Existentialism and Its Miscalculation 

In my first post, I posited that we live in a culture dominated by existentialism, with most of us unaware of its supremacy. I also referenced that two of the loftier goals of existentialism were personal freedom and personal responsibility, but what seems more prevalent in culture currently are their opposites. Very few take any kind of personal responsibility anymore, choosing instead to judge or cancel, and freedom has all but disappeared, replaced by affirmation and acceptance, which have more to do with attention and recognition.

Every cultural change that has been “thrust” upon us (I use Sartre’s word intentionally) moves us beyond original ideas, which is normal for culture, but in areas of freedom the cultural movement has been substantial in recent years. In the past, there was truthful (I hesitate to say true) freedom of speech. I may not have liked what some had to say, but I supported their right to say it, and they did the same for me; that mindset has become hard to find. Say the wrong thing and risk being canceled. Post the wrong thing, even in the distant past, and be canceled. That is not freedom of speech; that is attacking the very idea that gave us the right to hold such views. The attacks do not come from one side but from all sides. Those on one side blame the other side and vice versa. Everyone wants to blame everyone else, but the blame is ours … all of us. Those looking to judge and cancel do so from behind a curtain we have built and continue to support … a social media account, an obscure email or a nondescript text message. The informal restriction of freedom is here, and unless something changes, it will become formal soon. All of this, in my humble opinion, is a manifestation of existentialism’s miscalculation, which is the subject of this post.

Existentialism’s advocacy for agency and condition regarding man is not, in my opinion, the problem. The problem, as I see it, is its failure to address human nature, which is and has been a foundational issue in philosophical circles forever. That failure has left much unexplained, along with wide gaps and inconsistencies, which weakens all philosophical approaches, especially existentialism. The question of human nature is still there, despite the effort to remove it from the conversation, and there are still many referencing its presence. Existentialism untethered man, like no other philosophical approach before it, from his religious moorings, giving him boundless freedom and power; what did he do with it? Well, to be honest, that is the issue. Nothing changed; nothing was different. Man did what he had done in the past; he is no closer to the truth than he was prior to existentialism. However, man does appear to be more broken than before, which suggests to many, whether right thinking or wrong thinking, that there is something to the issue of human nature after all, especially considering all that is new to culture.

I think Sartre, Nietzsche, Kierkegaard, and maybe even Camus would be surprised, maybe even shocked, at where we are in culture today. There would be astonishment as to why we have not evolved past crime, selfishness or deceit. If a human being does not have a pre-ordained nature, why does he keep repeating the same mistakes over and over as if he did? If we develop and create our own essence, is there no means to learn from past mistakes? I believe there would be little support from the past for the canceling of others, as that betrays several foundational beliefs of existentialism, especially in the areas of personal freedom and authenticity. Sartre did acknowledge that man is conditioned by culture, but he still advocated for man to fight against this conditioning. The issue of human nature, however, remains.

Let’s look at this issue from a different angle; let’s look at it through the lie, as all of us are familiar with it and can follow its progression. If man’s nature is not bent toward the lie but instead toward the truth or some neutral alternative, then from where does the lie arise? It cannot flow out of the nature of man if man has no preordained nature to lie, as there would be no nature from which it could flow; nor can it flow out of a neutral nature, given the constancy we see in the prevalence of lies. If nature were neutral, we would see lies, but we would not see them so widespread, seemingly in everyone. Therefore, the only other option available to us is that lying must be conditioned into us through societal influences, but there are issues with that thinking as well. With no preordained nature, we are told that our essence is created and developed through our own agency. There are those who advocate lying as a means of self-preservation or as the manifestation of confusion as one contemplates how to live in an absurd world, but neither of those answers this question: why do small children lie?

As the father of two children who are now grown, I distinctly remember not teaching them to lie when they were small. On the contrary, my wife and I tried very hard to teach them to tell the truth. We did not send them to a place where they were taught to lie. Everything we did was done with the goal of telling the truth. Even before they attended school, they lied. Why is that? How do we explain the lie in small children without including in our explanation an innate nature? How do we explain that we all have lied and continue to lie without including in that explanation an innate human nature predisposed to lying? I admit the issue is more convoluted than simple, but it does present a dilemma. 

I have no answers to offer other than the one we do not want to hear … all indications are that we do have a preordained nature that predisposes us to lie. I am open to other options, but for me, this one checks more boxes than any other. This is just one issue; there are others, but they all come back to this issue of essence. If we create and develop our own essence, from where do we develop a disposition that lies and is capable of committing other, more serious offences? Conservatives, liberals, atheists and even existentialists will have fundamental disagreements on many issues, but on these issues there is a consensus. No one endorses lying, murder, theft or any other heinous act, and yet they continue to exist. Why?

An existentialist would suggest, as I stated earlier, that they are the result of the confusion faced in the search for meaning in a life that is, to existentialists, absurd, but that is a weak retort if, inside the same philosophy, we acknowledge the astounding ability to develop and create our own essence. Would this confusion that causes us to lie not also affect the creation and development of our essence? As you can see, there are more questions than answers, but I do believe there are enough questions to justify more discussion. I do respect the stance an existentialist takes in the complete rejection of murder on the grounds that it infringes upon another’s efforts to live an authentic life, but would that rejection not, itself, be an infringement on the murderer’s life and his attempt to live it? These are difficult questions with seemingly no easy answers.

Sartre would suggest that one’s freedom cannot place a limit on the content of choice, again a hard stance to take in certain situations; he valued the manner of the choice more than the choice itself, but still the choice, according to Sartre, rested completely within the individual, or at least it should. Yes, existentialists believe life is absurd, but in an absurd world there is plenty of room for order and structure, especially if creation and development contribute to both. For Sartre and other existentialists, it always came back to the idea of freedom and how it was defined. Inside existentialism, freedom is always defined as individual choice, confined to and owned by the individual.

Here is my issue. Individual freedom, owned by and confined to the individual, will move outside the individual at some point if exercised. One never exercises individual choice in a vacuum. Individual freedom that splashes over into crime will act upon another; the same can be said of individual freedom splashing into altruism. The issue in both cases is that individual freedom is no longer individual; once it is put into action in community, and it will be, it moves away from the individual, interacts with others and infringes on them. Nihilism rejected the idea of morals and values for this very reason, while existentialism embraces individual morals and values, presenting a dilemma. When it comes to morals and values, can they be held in isolation by the individual, or does that place the individual into a kind of moral paralysis or turn the individual into a moral hermit?

Questions like these are why thinking matters. I will have one final post on this topic. Until then … 

Existentialism: Part IV

Part IV: Existence Precedes Essence

In the first three posts, I attempted to define existentialism through the idea of individual choice, though definitions are next to impossible when referencing anything to do with existentialism. The idea of individual choice, however, is featured prominently and pushes existentialism into another idea much more complex than any before it: the idea of existence preceding essence. F.W.J. Schelling is credited with being the very first to use the phrase, in a lecture he delivered in 1841. Soren Kierkegaard, who was in attendance at Schelling’s lecture, used this idea in some of his works, but it was Jean-Paul Sartre who formulated the idea and expanded on it. The phrase features prominently in his 1945 lecture, “Existentialism Is a Humanism.” The phrase is also foundational to the work of many philosophers, especially Martin Heidegger and the metaphysics of his masterpiece, “Being and Time.”

This phrase, in my opinion, captures the spirit of existentialism better than any other. It flows out of defiance of the dominant idea of the time, that our essence is more fundamental than our existence. The inverted existential phrase promotes the opposite; it presents the idea that essence, something thought to be distinctly human, is not given, as had been thought, but developed, which is radically different from any thinking before it. Existentialism holds that we first exist (existence) and then create and develop our own essence through our existence, i.e., our choices and our actions. Sartre believed that existence precedes essence and saw this as defining and determining our thinking. This next part is quite brilliant, in my opinion.

Sartre, instead of arguing about the true nature of man, turned the argument on its head by insisting that there is no such thing as human nature … only the human condition. Sartre posited that we live as “self-conscious first-person perspectives,” imagining and reimagining who we are as we live. What he was saying was that being conscious of our own existence is ultimately what it means to be human. That is our condition, which implies that our nature is neither good nor bad but a condition in which we create and develop our own essence. For Sartre, there is no pre-ordained sinful nature; each person comes into existence and then, through decisions and actions, creates their own unique essence.

This issue of human nature, a philosophical battleground for many years, was seemingly answered with this one phrase; according to Sartre, there is no predefined subject, no fixed identity and no pre-ordained path or objective, at least that was his assertion. There is only existence, and all things come after it, which leaves everything in our hands as human beings. Sartre writes, “Man, first of all exists, encounters himself, surges up in the world—and defines himself afterwards.” While Sartre believed this, he also acknowledged that we face, as human beings, a number of constraints in our lives. He believed human beings had appetites and desires for power and fame, which deals directly with the nature of man, whether he acknowledged it or not. He did acknowledge that pre-existing identities and meanings will be “thrust” upon us, but our role is to define ourselves and not allow them to define us.

As I have referenced, existentialism establishes as one of its fundamental truths—if one can even use the word “truth” in reference to anything existential—that human beings are not born with a pre-defined purpose but instead forge their own path through their own human existence. I must ask this question: is that not itself a predefined purpose? The phrase attempts to push aside any thought of an involved or interested deity in favor of individual human agency, which suggests that individuals are not born with or given an essence but develop it through their individual existence, which fits nicely with an evolutionary mindset. Most existentialists believe this mindset produces personal freedom and personal responsibility, while acknowledging that situations and circumstances do fall outside our control at times. We can acknowledge that existentialism produces a kind of freedom, but I am not sure we find the responsibility Sartre thought would follow. If we are now living in an existential world, what do we see? Do we see personal responsibility? Do we even see personal freedom? What exactly do we see before us? Whatever it is, it is a manifestation of existentialism, but that is a post for another day. This post is already too long, so I will take that line of thinking up next time. Until then, please remember, thinking matters!

Existentialism: Part III

Part III: Existentialism and Pavlov’s Dogs

In my last post I referenced Pavlov’s dogs and operant conditioning. That is an incorrect reference; my apologies. The correct reference for Pavlov’s dogs is classical conditioning, Pavlov’s foundational theory, which involves pairing a neutral stimulus with an unconditioned stimulus to elicit a conditioned response. In his famous experiment, he found that dogs naturally salivate (an unconditioned response) when presented with food (an unconditioned stimulus); after the food was repeatedly paired with a neutral stimulus, such as the sound of a bell, the bell alone came to elicit salivation (the conditioned response). What does any of this have to do with existentialism? Let me retrace my steps a bit and explain the differences between operant and classical conditioning and how both become important in existentialism.

Operant conditioning is a learning process that uses rewards and punishments to modify voluntary behaviors. In operant conditioning, behaviors that are rewarded are more likely to be repeated than those that are punished. Naturally, you want to reward wanted behavior and punish unwanted behavior. Operant conditioning is based on the work of Edward Thorndike, whose law of effect theorized that behaviors persist or fade depending on whether their consequences are satisfying or discomforting. Thorndike’s theories were foundational to early public education in this country and are still employed in classrooms today.

However, operant conditioning differs from classical conditioning in important ways. Classical conditioning pairs stimuli with biologically significant events to produce involuntary, reflexive behaviors. Operant conditioning, by contrast, works on voluntary behavior and depends on the consequences of that behavior, i.e., the reward or the punishment, more than on the stimulus that precedes it.
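For readers who think better in concrete terms, here is a toy sketch in Python of the two learning processes contrasted above. To be clear, the functions, numbers and learning rates are my own illustration, not anything drawn from Pavlov’s or Thorndike’s actual experiments; the classical-conditioning update loosely follows a Rescorla–Wagner-style rule in which the conditioned response strengthens toward a ceiling with each pairing.

```python
# Toy illustration only: neither function models real experimental data.

def classical_pairing(pairings: int, rate: float = 0.3) -> float:
    """Classical conditioning: strength of the conditioned response
    (e.g., salivating at a bell) after repeatedly pairing the neutral
    stimulus with food. Each pairing closes part of the remaining gap
    to full strength (a Rescorla-Wagner-style update)."""
    strength = 0.0
    for _ in range(pairings):
        strength += rate * (1.0 - strength)  # involuntary reflex strengthens
    return strength

def operant_update(p_behavior: float, rewarded: bool, step: float = 0.1) -> float:
    """Operant conditioning (law of effect): a rewarded voluntary behavior
    becomes more likely to be repeated; a punished one becomes less likely."""
    p_behavior += step if rewarded else -step
    return min(1.0, max(0.0, p_behavior))  # keep it a valid probability

# The structural difference: classical conditioning changes a reflex by
# pairing stimuli; operant conditioning changes the odds of a chosen
# behavior based on what follows it.
print(classical_pairing(10))          # reflex strength after 10 pairings
print(operant_update(0.5, rewarded=True))  # behavior made more likely
```

Again, the linear updates are illustrative choices; the point is only the distinction the paragraph above describes — one process shapes an involuntary response through pairing, the other adjusts a voluntary behavior through its consequences.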

As I said, existentialism, at its core, is about the individual and the individual’s choices in life. As the world becomes more connected, we run the risk of this connectivity being used as stimuli to condition us into certain behaviors. This can only take place if we exist in a bubble where our individual choices become more important than collective choices or commonalities, which I see taking place more and more. In this world, there is no longer a need for the collective or the common because most choices and beliefs are considered acceptable by the world. 

As a matter of fact, anything bringing us together actually makes conditioning, both classical and operant, more difficult. The impact is lessened by observational learning, where individuals within a group learn by observing others in the group. Groups have norms, which impact conditioning due to the natural tendency and desire of individuals to adopt the norms and behaviors of the group. In that situation, individuals are less likely to respond to conditioning because of their natural tendency to focus on the behaviors and norms of the group in order to be accepted by it. (There is a lot of research on group dynamics that supports this assertion.)

If we live more as individuals and less as a collective group or community, there is a better chance of stimuli being used to manipulate us (either with an unconditioned stimulus or with rewards and punishments), especially if our individual choices are only about us. Eventually, when we make choices and benefit from those choices, as we will when we make choices that are only for us, we become conditioned to believe that our functional order (see my last post) is our moral order when it is not. We are merely living in the moment of our functional order, which seems moral because our actions are our actions, the direct result of making functional choices that benefit only us. One author put it this way: “we become ‘just’ by performing ‘just’ acts,” but these just acts are merely our acts, which we believe are just because we can perform them and they help us attain what we want. That is not morality.

Our actions, which we control, root us in our own individual lived existence. Again, this does not make them moral, but because they are our actions, they do something to us. That something takes on greater significance if we live in an existential world, where the individual is the focus at the expense of the community. This isolates us and makes us sensual, pushing us toward living by feelings and comfort, which both tend to be deeply intimate and emotional, while pushing us away from any kind of dissonance, which is necessary for actual learning. In this state, we are never wrong, never challenged and never confronted with new and different ideas. We become our own god, sovereign in all things, always right and unchallenged in any way.   

Inside existentialism, with its focus on choices, we slowly become conditioned to think that our individual choices determine our character and impact who we are as human beings, and not the other way around. Our choices, in an ideal world, should be shaped by who we are and what we believe, which tends to be shaped by our community. A community thrives when it is diverse but united, yet the only way to unite a community is with a common identity. In an existential world, there is no common identity; there are only acceptable identities, which are individual, personal and isolated. This is existence, and it produces emotional reactions that divide and never unite. It pushes everyone to examine everyone else with no concern for self; it also pushes us to condemn the past, judge the present and think nothing about the future. This is existentialism, and it is where I see us presently. This concludes this post. Stay tuned for Part IV!