A Deep Dive into Artificial Intelligence

What is Artificial Intelligence (AI)? According to NASA, AI “refers to computer systems that can perform complex tasks normally done by human reasoning, decision making, creating, etc.” NASA adds that there is “no single, simple definition” of AI, and that is because the field is constantly changing and growing.

As I speak with people on the topic, I tend to receive two responses: one of fear and one of reckless abandon. There are those who are extremely concerned about AI and what it will do to us as human beings. Then, there are those who can’t wait to open Pandora’s box and see all the wonderful benefits waiting to be used.

In the little research I have done on AI, I have discovered that, in general, there are three fundamental components of all AI systems. There is data, which is what a system learns from and bases its decisions on; without large quantities of data, there are no decisions. There are algorithms, the sets of rules systems use to process these large quantities of data. Then, there is computing power: AI systems need computing resources to push these large quantities of data through their complex algorithms. As you can imagine, running these systems requires large quantities of power. The toy sketch below puts these three components side by side.
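
To make those three components concrete, here is a deliberately tiny, illustrative sketch in Python. It is my own toy example, not a description of any real AI system: a handful of data points, a simple fitting algorithm (ordinary least squares), and the computation that joins them.

```python
# Data: a few observed (input, output) examples the "system" learns from.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

def fit_line(points):
    """Algorithm: ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Computing power: trivial arithmetic here, but this same step at the
# scale of a modern AI system is what demands enormous resources.
a, b = fit_line(data)
print(f"learned rule: y = {a:.2f}x + {b:.2f}")
```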

As far as the history of AI goes, the groundwork for the idea began in the early 1900s, but the largest advances are recent. Alan Turing began exploring machine intelligence in the 1950s; he published a paper entitled “Computing Machinery and Intelligence,” in which he proposed a test of machine intelligence. He called this test the Imitation Game, and it eventually became known as the Turing Test. This was a watershed moment, as AI technology began to develop rapidly after this point.

Computer development accelerated with increasing processing speeds in the 70s and 80s, producing faster, cheaper, more accessible machines. AI programming languages already existed by this time, most notably Lisp, created in the late 1950s, but computers were still too weak to demonstrate any kind of intelligence. The 80s were a time of growth and of increased interest in AI, due in part to breakthroughs in research, which increased funding opportunities. The 90s produced the first functioning AI systems: the first AI to defeat a world champion chess player, AI robots, robotic vacuum cleaners, and AI speech recognition software. In the late 1990s and 2000s, there were further significant advances; automation and machine learning were used to solve problems in academia as well as in the real world, which brings us to today.

There are AI systems all around us, and their use continues to increase daily. AI is used in law, medicine, education, engineering, science and more. There are enormous benefits to its use: it can solve problems and diagnose diseases. But, like anything else, with the benefits come the detriments. There are detriments, even though I have spoken to several people who see none. I have my own concerns, but for today I will address just one: entropy.

AI systems are created with entropy in mind, but the entropy I want to consider is the entropy found in thermodynamics. The second law of thermodynamics states that the entropy of an isolated system can only increase or remain constant; it never decreases. As great as AI is, it is still a created system, and it still must deal with entropy. I tend to look at entropy and its relationship with AI from the perspective of physics, which indicates that systems tend to move towards a greater state of disorder and randomness, not away from it. If I am right to look at entropy’s relationship with AI this way, what does it say about AI’s future? Is it endless? Is it immune from entropy?
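
For readers who want the formal statement, the second law invoked above is usually written compactly as follows, where S is the entropy of an isolated system (a standard textbook form, nothing specific to AI):

```latex
\Delta S \ge 0 \qquad \text{(isolated system: entropy never decreases)}
```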

As AI becomes more a part of its own data, and by data I mean the content it creates that is added to the data to which it has access, what will happen to its state of entropy? Will it decrease or increase? I believe it will most certainly increase. I do see, in the distant future, an ancestral relationship between AI and its data once its database moves past a 50% bifurcation point. What I mean by this is that at some point in the future, AI will have created so much of the data that makes up the database (the internet) that it begins to use its own created data to make decisions. The toy simulation below illustrates the kind of degradation I worry about. Will this matter?
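
Machine-learning researchers have begun to study a version of this scenario under the name “model collapse.” Purely as an illustration, with made-up numbers rather than any real system, here is a toy simulation of a model repeatedly refit to samples of its own output; the spread of what it produces tends to wither across generations:

```python
import random
import statistics

# Toy sketch: each generation fits a normal distribution to a small
# sample drawn from the previous generation's model, then the next
# generation trains on that synthetic output.
random.seed(42)

mu, sigma = 0.0, 1.0  # generation 0: the original, human-made data
for gen in range(1, 51):
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(synthetic)     # refit on our own output
    sigma = statistics.stdev(synthetic)  # estimation error compounds
    if gen % 10 == 0:
        print(f"generation {gen}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# sigma typically drifts toward zero: diversity collapses as the model
# feeds on its own creations.
```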

I do think this will matter. What will it do to AI’s ability to think and reason? Here is a harder question: will it be too late? What I mean is, will it be too late for us at that future date due to our conditioning and dependence on AI systems? If, at a future date, this ancestral state of entropy is reached and it results in AI systems suddenly providing false information or some untrue truth, will we be able to recognize that information as false, or will we be too far gone? There are hard questions regarding AI that are not being discussed and need to be. Will we take the time to discuss them, or are we in too much of a hurry to usher in AI as the solution to all our problems? When that day comes, AI will be the least of our worries. Until next time …

Deconstructing Deconstructivism: Part I

Deconstructivism, another theory critically aimed at the norms of culture, is a theory that has impacted all of us, and yet most of us have never heard of it. To understand it is, at best, to attempt to understand it because, be warned, it is ambiguous and vague. It is like nailing Jello to the wall; once you think you understand it, it interacts with something else and changes. Deconstructivism is change and difference and criticism and tension all rolled up into what I see as varied disparity. It first appeared in a 1967 book entitled Of Grammatology and has grown in reference and documentation ever since.

Let’s begin with a quote. Jacques Derrida, in his article “Letter to a Japanese Friend,” explained deconstructivism to his friend by insisting that it is “an event that does not await the deliberation, consciousness or organization of a subject or even modernity” (Derrida, 1985, p. 2). If this seems a rather odd way to describe an event, you are right, but it is not an event that he is describing; it is deconstructivism. Inside that seemingly innocuous description is an affirmation of deconstructivism’s metaphysical reality. Derrida stated to his friend (Professor Izutsu) that to define or even translate the word “deconstruction” would take away from it, which suggests something about both its nature and its protection. How does one disagree with that which cannot be defined or translated? The answer is simple: one does not, because one cannot.

Derrida, in my opinion, was stating that deconstruction was a notion of a reality rooted in situational agency. It was designed to avoid the confined corner, the proverbial box, the closed door, and to assert its own agency in interaction with individualism (or context) as a means of truth. According to Derrida, it was and is a critical methodology that analyzes how meaning is organically constructed and deconstructed within language, which we all understand to be the primary means of communication between human beings. And yet it is not language that seems to be under attack; instead, language seems, to me, to be the vehicle of delivery for deconstructionism.

Let’s be clear: deconstructivism is not a form of Marxism nor of Critical Theory, but it is related to both, although indirectly. The process itself claims to reveal the instability of language, which it presents as language’s true and natural state. Is language unstable, or is language, as my cynical mind suspects, being pushed to instability by deconstructionism? I would like to posit a question: if instability were not language’s true and natural state, could deconstructivism determine language’s state, or would it, instead, change that state? I am not sure, but I look forward to exploring that possibility and others. I do know this: its existence depends on the instability of language and meaning.

Back to the question: is language unstable? Yes and no! I think language is like anything else; it works from instability towards stability. I know I seek clarity in my communication, and one of the ways I do that is to ensure that meaning is consistent with those with whom I am communicating. How is language unstable? I believe language is unstable if meaning inside language is unstable. How does that instability remain and not work towards stability? One of the methods of maintaining instability is addition. When other meanings are added to true meaning, clarity is not produced; instead, instability is maintained. Addition, for me, creates instability, especially when it comes to language. If we have found instability to be a state of language, and this discovery was the direct result of deconstructivism’s interaction with language, then there is another, more difficult question to consider. Is the instability of language its true and natural state, or is it a direct result of deconstructivism’s interaction with language?

Deconstructivism claims that one of its goals is to push meaning to its “natural” limits and expose its “true” nature which, according to Derrida, is instability heavily dependent on difference (addition). I am not a big fan of coincidences and see them as problematic. Here is my issue: if language is considered unstable in its natural state, and deconstruction is instability in its interaction with language, is this a coincidence? Again, I don’t really buy into coincidences. I do know that when instability interacts with stability, the result is generally less stability. We know this through the study of physical systems, biological systems and even social systems. We also know instability manifests in three ways: gradual change, sudden transitions and oscillations (the toy example below shows all three in a single simple system). There is nothing to indicate that when instability is introduced to a stable system, that system stays the same or even stays stable. It always changes; at times, stability may eventually be achieved again, but not before the system goes through a period of instability. My point is that I am not convinced that the natural state of language is instability. There is a solid case that the instability of language is due, in part, to its interaction with that which is unstable. Are you confused yet? Buckle up, because this roller coaster ride is just beginning. This post is the start of a deep dive into the world of deconstructivism. Stay tuned for my next post in this series. Until then …
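
Purely as an analogy from the physical systems mentioned above, and not as a claim about language itself, here is a tiny sketch of how one simple system (the textbook logistic map) displays all three faces of instability depending on a single parameter:

```python
# Logistic map: x -> r * x * (1 - x). One rule, three behaviors.
def trajectory(r, x=0.5, skip=100, keep=4):
    for _ in range(skip):          # let transients die out
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):          # record the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

for r, label in [(2.8, "settles to one stable value"),
                 (3.2, "oscillates between two values"),
                 (3.9, "changes unpredictably (chaos)")]:
    print(f"r = {r}: {trajectory(r)}  <- {label}")
```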

Derrida, Jacques. (1988). “Letter to a Japanese friend.” In David Wood & Robert Bernasconi (Eds.), Derrida and Différance. Evanston, IL: Northwestern University Press. (Original work published 1985).

Epistemology: Knowledge, Understanding or Both

Have you ever said, “I do not understand”? I am sure you have, but have you ever thought about what it means to understand? It seems so basic a concept that everyone should understand what it means to understand, but do we? Do we understand in the same way as we used to? Is understanding someone the same as understanding something? This post explores understanding through the lens of philosophy.

It is fascinating to read that this concept of understanding has, in philosophy, been “sometimes prominent, sometimes neglected and sometimes viewed with suspicion,” as referenced in the Stanford Encyclopedia of Philosophy (SEP), which was my main resource for this post (Grimm, 2024). As it turns out, understanding, or as it is known in philosophical circles, epistemology, differs depending on the time frame. Who knew?

Let me start with the word “epistemology,” which was formed from the Greek word episteme, which for centuries was translated as knowledge, but in the last several decades “a case has been made that ‘understanding’ is the better translation” (Grimm, 2024). This is due, in part, to a change in the semantics of the word “knowledge.” That change was prompted by a shift towards observation as the primary means of obtaining knowledge, which is not so much a change in understanding as it is in the semantics of knowledge. But should that change how we define understanding?

The SEP references theorist Julia Annas, who notes that “episteme [is] a systematic understanding of things” as opposed to merely being in possession of various bits of truth. We can know (knowledge) what molecular biology is, but that does not mean that we understand molecular biology. There is a clear difference between knowing something and understanding something, or at least there used to be. Both Plato and Aristotle, according to the SEP, considered episteme an “exceptionally high-grade epistemic accomplishment”; they viewed it as both knowing and understanding. The Greeks and most of the Ancients valued this dual idea, and yet, according to the SEP, subtle changes in the word’s semantics took place over time, moving episteme from knowing and understanding to just knowing, which, in my opinion, allowed observation a more prominent role regarding understanding. The question is, did observation improve our understanding of understanding?

There are many theories on why this shift in the semantics of understanding occurred, but occur it did. My concerns do not center on the “why”; instead, they center on the impact of this shift on present understanding. The idea of understanding went through a period in the past when its overall importance diminished and was replaced by the idea of theorizing, which is not understanding but speculation. According to the SEP, theorists throughout history have proposed various theories about understanding, and most did two things: they pulled us away from the original idea of understanding and pushed us towards a focus on self. It was self that was understanding’s biggest threat in the past, and it is self that remains its biggest threat presently.

When I read that understanding was neglected in the past, I struggled to make sense of why. Who would not want to understand? It was only when I learned that, at the time, understanding was thought to be primarily subjective and psychological, with a focus on an understanding that was familiar, that it made more sense to me. Familiarity is the idea of being closely acquainted with something or someone, and its impact was to push understanding towards self and away from the dual idea of knowledge and understanding. This push mutated understanding into what amounts to an opinion, making it foundationally subjective; that is, until it bumped into science. In the world of science, understanding, or as it is often referenced, epistemology, was forced to move away from subjectivity and towards objectivity in order to interact with positivism, which was foundationally dominant in science until recently.

According to the SEP, the notion of a subjective understanding inside epistemology was, rightfully, downplayed in the philosophy of science due, in part, to the efforts of Carl Hempel (Grimm, 2024). Hempel and others were suspicious of this “subjective sense” of understanding and its interaction with science. According to Hempel, “the goodness of an explanation” had, at best, a weak connection to understanding, especially real understanding. His point was that a good explanation might produce understanding, but then again it might not; either way, it would still be familiar and seem like understanding. That is not objective, and objectivity is what science requires. The work of Henk de Regt drew a distinction between the feeling of understanding and real understanding. He argued that “the feeling is neither necessary nor sufficient for genuine understanding.” His point, which seems straightforward, was that real understanding has little to do with feeling. Feeling is neither scientific nor objective; it is always rooted in self, which is not understanding.

Understanding is thought to be a deep knowledge of how things work and an ability to communicate that knowledge to others. This presents a question: what is real understanding? According to the SEP, there are multiple positions on this one question. It is interesting to note the presence of “luck” in positions on understanding, with one position asserting understanding as akin to full-blown luck (the fully externally lucky position). This is where I differ from the SEP and dismiss the idea of luck altogether. These positions assert, in subtle ways, understanding as a pragmatic, product-oriented method; all that seems to matter is that you understand, which, by all indications, would not be true for true understanding. True understanding is being able to explain to others, in detail, the understanding you understand. The fully externally lucky position is rather pragmatic and contrary to this idea of understanding. It seems to stop at one’s own understanding and does not consider that to truly understand, one must be able to pass on the understanding one understands to another.

The contrasting position argues that one needs to understand in the “right fashion” in order to understand again, and for me, the word “again” is key. In other words, understanding, to be considered understanding, always needs to be replicated in a way that can be communicated to others so that they understand, and to do that, one must understand the process every time and not just one time. The first position, for me, violates the duality of understanding and knowledge. This is important because, for me, it is the duality that completes understanding. To understand a concept, one must know what the concept is and understand how it works. The first position, the fully externally lucky position, blends knowledge and understanding into something that loses the semantics of both, pushing understanding into a pragmatic area where it becomes almost tangible, discounting the process in favor of the product. This is not understanding but a lower form of knowledge. True understanding is always a process that explains how the product came to be, how the product works and how the product is applied.

There are those who argue that understanding does tolerate “certain kinds of luck.” These philosophers hold positions that understanding can be “partly externally lucky.” Is it me, or does luck have no place in understanding? If luck has any place in understanding, then that understanding is not understanding but a stumbled-upon form of knowledge. No one stumbles onto a medical degree, nor onto the knowledge needed for it. Most would not consider this a proper application of their position, but understanding builds on itself, and if it does that, then this application is not as stretched as it would seem. I believe the idea of understanding goes beyond the discussion in this post. It is an esteemed element of our humanity. It is who we are as human beings, and a large part of what makes us human.

There are those, and the number grows daily, who no longer value understanding nor want to spend energy doing it. They consider it an antiquated process, no longer needed because we have technology; specifically, we have AI to do all our understanding for us, right? But do we? Does AI help us understand, or does it only provide explanations? Are explanations understanding, or are they something else? I believe understanding is distinctly human. I believe it is how we interact and build community. Maybe we don’t need to understand chemistry (I think there will always be a need to understand chemistry and everything else), but we will always need to understand each other, because we all are different.

If we no longer strive to understand the things we do not know, how will we ever understand anything or anyone? Will we even want to understand in the future if we no longer seek to understand in the present? Will we become conditioned to enjoy being isolated and introverted? That seems sad, and not human. This idea of understanding is much more complex than most realize. The issue is not just one of episteme but one of humanity, at least to me. Think long and hard about understanding, because once you lose it, recovering it will not be easy. Thanks for reading! Until next time …

Grimm, Stephen. (2024). “Understanding.” In Edward N. Zalta & Uri Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2024 ed.). https://plato.stanford.edu/archives/win2024/entries/understanding/

The Rise and Fall of Western Civilization: Part III

[Image: The Roman siege of Jerusalem in 70 C.E.]

Part III: The Beginning of the End

Many consider the “West” a nebulous term with no meaning and no history, and yet most consider it in decline. As I have referenced, when Oswald Spengler published his epic, The Decline of the West, he posited that the West “wasn’t just in decline; it was being dragged under.” His thesis was that all “cultures” go through a process of birth, blossoming, fruit production and withering to the point of death. The withering phase he called “civilization,” because he associated it with excess, debilitation, loss of identity and, finally, death. Spengler first published his masterpiece in 1918, and at that time he saw the West in the withering stage. As he pointed out, the beginning of the withering stage is excess. When civilizations reach the point of excess they become fat; that is not a point of celebration but one of warning. Spengler saw the West at this stage, which forces us to consider a question we would rather not: where is the West now?

Let’s be clear: the West is not a country, nor does it have geographical boundaries, but it does have a birth, and because it has a birth, it will ultimately have a death. Its birth, according to Spengler, occurred with the fusion of German nobility and the Western Roman Empire, as Spengler saw his native Germany as part of the West. Others point to the marriage of Athens and Jerusalem, but all are references to the merging of the two known worlds at the time into something new and different. Spengler thought the West “blossomed” in the Italian Renaissance, bloomed in the Baroque period and produced its greatest fruit in the 19th century. Gregg posited that the Enlightenment was one example of its fruit, but fruit is only good for a time; eventually it rots.

The Enlightenment, most would say, was not united with Christianity but instead at odds with it. Gregg rejects that idea, along with any idea that the Enlightenment advanced individual reason at the expense of personal faith. He acknowledges the rise of and focus on reason, but he also points to examples of reason and faith coming together for good during the Enlightenment. He presents one important Enlightenment figure in support of his supposition: Sir Isaac Newton. It is thought that Newton wrote his Principia Mathematica in response to the “materialist assumptions” of René Descartes and his views on planetary movements. Newton believed that the entire cosmos, including planetary movement, was governed by a Holy God and his divine providence. It was his faith that drove him to study the world and understand it. Many Enlightenment thinkers considered religion superstition, but others, like Newton, did not.

As far as products of the Enlightenment go, the founding of America is often referenced as one of its greatest. While there is evidence to support this assertion, there is also evidence (its foundational documents) that tells another story: one where its founders grounded virtue and human morality in reason bathed in a belief in divine goodness. Those Enlightenment ideas that were at odds with the Christian faith coincide with the rise of reductionism and the scientific method, as both were coming of age at this time. It was reductionism and modern science that attacked faith, presenting it as incompatible with reason, for the purpose of crowning reason as the only king.

According to Gregg, there were two claims that severed Enlightenment reason from the Christian faith. The first was the belief that there is no fixed human nature, which clashed directly with the Christian belief in a sinful human nature. The second claim, that the only true knowledge is scientific knowledge produced by the scientific method, contradicted the Christian belief that all knowledge belongs to a Holy God. Gregg argues that both claims isolate science from faith and subvert all belief in God. Science and faith were presented as mutually exclusive, with science celebrated and faith mocked, but, quite unintentionally, the position science claimed and occupied alone would eventually subvert science and reason. We only need to look at current culture and the presence of Critical Theory as proof. It cares nothing for science or reason; it only cares for itself. There is no logic or scientific methodology; it alone is king and ruler. I would like to posit one notion to consider from this point forward: as the Enlightenment was attacking the Christian faith, it was also attacking itself; it just did not know it.

The ideas and principles it deployed eventually came full circle and were used against it. Reason, the scientific method and humanism, all employed by the Enlightenment to its direct benefit, were critiqued, undermined and turned against it by other movements like Romanticism, Idealism, Rationalism and Postmodernism. These movements revealed that the limitations and exclusions the Enlightenment sought to eliminate from the world were alive and well inside its own ideas, due in part to its own nature. It is this nature that was, in my opinion, adopted, manipulated and used by Critical Theory to assert itself in the West as the new authority. It is Critical Theory that now pushes the West to the brink of decline and death.

Stay tuned for the last post in this series as I discuss where the West is now. Until then, remember thinking matters!  

Critical Theory: Part III

The Crack in the Door 

When examining Critical Theory through the eyes of Max Horkheimer, we can see a bit of what makes it unique and different. Let me begin with a quote from Horkheimer; early in his essay, he wrote, “There is always on one hand, the conceptually formational knowledge and on the other hand, the facts to be subsumed under it. Such a subsumption or establishing a relation between the simple perception or verification of a fact and the conceptual structure of our knowing is called its theoretical explanation.” Here Horkheimer began, subtly, to push a shift in our knowing, moving it into a realm of near ascendancy in which facts are subsumed under our knowing. That is a distinctly Marxist tendency, but it was also very much an attack, albeit a subtle one, on current knowledge. This is an important distinction to remember as we move forward.

Horkheimer unpacked the idea of theory early in his essay, starting with a statement regarding the essence of theory. He wrote, “What scientists in various fields regard as the essence of the theory thus corresponds, in fact, to the immediate tasks they set for themselves.” This one statement, in my opinion, supported his earlier assertion regarding our knowing: that it is powerful, dominant and impactful on theory. He went on to use words like “manipulation” and “supplied” to reinforce his concept of theory over the one that so many have held as the standard and the determiner of our knowing. He was asserting that theory was nothing more than an intentional task of an individual, inside their own individual essence, which was the application of his new assertion. Because he believed knowing was rooted in our own ascendancy, he also believed it was nothing more than an individual choice, which established the concept of theory as he needed it to be presented: as personal, individual and rooted in historical and social conditions. He wrote of “the manipulation of the physical nature” in the context of the “amassing of a body of knowledge,” such as is supplied in an ordered set of hypotheses, to imply, again, that there was individual intentionality to the idea of a theory.

Horkheimer wrote of the influence of the subject matter on a theory and vice versa, calling the process “not only an intrascientific process but a social one as well.” It was important for him to defend his assertion of ascendancy, as that assertion was also an attack on the dominant social thinking of the day, which he identified earlier as theological in nature. The essay, at this point, reads in some respects as a scanning of the theoretical landscape in search of useful tools to develop and apply in ways that created a distinctly Marxist ascendancy at the expense of the dominant theological one. We can identify the tools he found useful through the points of his deconstruction.

One such example was his references to the Positivists and the Pragmatists. He noted that both drew similar connections between the theoretical and the social, and he questioned whether either connection was useful or even scientific. He pointed to the scientific task, to the scientific community, and to their general appeal in such situations to a “sense of practical purpose” and a “belief in social value,” when his assertion was that both were nothing more than “a personal conviction.” This is a primary example of Horkheimer taking both concepts and deconstructing them to the point of doubt in their current states, with the purpose of reconstructing them later inside Critical Theory, to its enhancement and support.

The dominant idea of theory was always under Horkheimer’s attack. He took the current idea of theory found in science and employed his deconstruction/reconstruction dichotomy to develop the future one he intended to deploy. He began by acknowledging that scientists in various fields regard their own tasks as the essence of theory, and the key word here was “essence,” which he had already established as that which was rooted in man. This idea of essence destabilized general scientific theory, pushing it away from the theoretical and towards the practical, pragmatic and even personal, and it was the personal that destabilized theory at the level of its very existence. Once theory is personal, it is no longer theoretical but instead opinion or perception, which positions it to be developed into something completely different. This was the brilliance of Horkheimer on display in Critical Theory.

The conception of theory, for Horkheimer, was “grounded in the inner nature of knowledge” for the purpose of establishing it as historical, which, again, pushed it completely away from the theoretical, reducing it all the way down past the personal to an “ideological category.” Once it was categorical, it became vulnerable to manipulation and stood in a state of readiness for the purposes and intentions of the manipulator. Horkheimer wrote, “That new views in fact win out is due to concrete historical circumstances, even if the scientist himself may be determined to change his views only by immanent motives.” He went on to note that inside science, concrete historical circumstances, while important, tend to succumb to “genius and accident.” Here, again, is the brilliance of his deconstructive process on display as he cast doubt on the very essence of theory as it was then known and employed.

As Horkheimer explained why social conditions are not considered as much as other factors, he made an interesting reference. He wrote, “The theoreticians of knowledge usually rely here on a concept of theology which only in appearance is immanent to their science.” That one statement was followed by a reference to new definitions and how they are drawn, depending on the directions and goals of research. This was the process of Critical Theory and the beginning of the end of general scientific theory, which would never again be as dominant, despite the efforts of many. It took years of consistent, continuous pounding away at the foundations by generations of Critical Theorists, but here we sit in the place that Horkheimer imagined long ago. Critical Theory has become that which imposes its will on all other theories.

As we read Horkheimer, we are experiencing his deconstruction/reconstruction process as it was used on the concept of theory inside science. It was this deconstruction of theory that was the crack in the door, so to speak, through which Critical Theory came barging into culture. It is through Critical Theory that so many other theories entered our culture and pushed their way into battling its norms, yet none of them would ever have been granted access under the old concept of theory. The idea of the theoretical had to be destroyed for Critical Theory to take hold, due to its subjective nature and its Marxist tendencies.

This concludes this post. Stay tuned as I unpack more of Horkheimer’s essay, with the hope that it will help us understand more about our world and the impact of Critical Theory on it. Until then …

Do We Still Have Common Sense?

The other day, in the middle of a conversation, the idea of common sense was presented as something all but gone in our culture. The subject came and went too quickly. It was only afterward, upon reflecting on the conversation, that the idea came back to mind, and I couldn’t dismiss it. It stayed with me, prompting me to do a little digging into its origins and its current reality.

Let’s establish, first, that common sense is not a liberal or a conservative mindset. It is not a particular worldview or political position. I think many of us look at the absence of common sense as positional; to have it, one must hold a certain position, usually a position that aligns with our own. That is not common sense.

The origin of the phrase is found in a school of philosophy said to hold the notion that we should begin our thinking with the fixed beliefs of mankind and move on from there. This phrase, or notion, whatever you want to call it, was first penned by Aristotle, who believed that all living things have nutritive souls, but that only human beings possess a rational soul. He believed it was only this rational soul that perceived. Aristotle proposed that every act of perception involved a modification of one of the five senses that then interacted with one’s entire being when engaged with one of the fixed beliefs associated with all human beings.

Aristotle saw one’s perceptions as provinces of sensation and believed that human beings perceive by means of the difference between the polar extremes contained within each sense. For example, he saw these provinces of sensation as a “kind of mean” between two extremes, as in the difference between soft and loud in sound or bitter and sweet in taste. His inference was that human beings perceive by means of difference, but he believed that one sense cannot perceive itself. According to a host of theorists, Aristotle speculated that there must be an additional sense, a “common sense,” that coordinated the other senses. He suggested that this “common sense” instituted a perception that is common to all the other senses yet reducible to none of them.

Most theorists agree that this common sense, referenced by Aristotle, was not a sixth or additional sense; instead, it was more a sense of difference, a unity of the senses that manifests when considering something of significance, a fixed belief, if you will, engaging all five senses, which in turn act collectively on one’s being.

Mention common sense today and most default to the ideas of practical judgement and social awareness as both relate to an individual being living in a world with all beings, but there is a deeper implication … the one with which we started. Do most still have common sense? Or is there still a need for common sense? Both questions have implications socially and culturally. 

First, are there any commonly accepted fixed beliefs that almost everyone, even in their differences, agrees on or acknowledges? It is thought that agreement on or acknowledgement of these fixed beliefs manifests common sense, but if there is a dwindling number of fixed beliefs … what happens to common sense? I am proposing that, culturally, there is indeed a diminishing number of commonly accepted fixed beliefs, and that this is due to all individual beliefs being given positions of acceptability. The question not yet answered is this one: does the acceptance of all individual beliefs still produce common sense in the same way a communal acceptance of a fixed belief did in the past?

When was the last time you heard common sense referenced? I can’t say that I have heard the phrase in quite some time. As I look out at our world, I see an absence of common sense, but does anyone else? Common sense seems, to me, to be an individual trait produced by communal membership. Does the absence of common sense signal an absence of community, or an absence of something else? I am thinking of submission or empathy, two things I see less of these days.

The idea of common sense was the sense that kept you “in the middle of the road,” if you will, and kept you connected with all others, your differences intact. It was this “common,” amid all your differences, that you shared with your fellow men and women; it connected you with others who were themselves different while allowing you to keep your differences. It was “common sense” that tolerated individual differences for the sake of the collective whole. Over time, some individual differences became acceptable to our collective common sense, but what happens when all differences are given equal status of acceptability? Well, first, we lose the need for common sense, and second, I am not entirely sure, but my sense is that we lose something important … something communal … something distinctly human.

I would love to hear your thoughts! Hit the comment section with them, because thinking matters!