EPISTEMOLOGY for DUMMIES

Epistemology doesn't help us know much more than we would have known if we had never heard of it.  But it does force us to admit that we don't know some of the things we thought we knew.  We study epistemology to accomplish at least five goals:

  1. to know how we know stuff
  2. to know if other people really know what they claim to know
  3. to distinguish what is knowable from what isn't
  4. to formulate an epistemological foundation that won't collapse when we try to build on it
  5. to respond to the skeptics who keep asking, "How do you know that?" and to incorrigible agnostics who claim to know little if anything
  6. and possibly a few other related goals not specifically stated above

Unfortunately, the people who write on epistemology don't tell us how to accomplish those goals.  They give us a bunch of theories on how some of those goals might be accomplished.  But by admitting that those theories are theories, the writers admit that they don't know whether those theories are true.  In fact, they present us with counterarguments to those theories, and counter-counterarguments for as long as we are willing to continue reading.  Thus, they don't help us know anything, except that we just wasted some time.  They do, however, help the epistemological in-crowd sound knowledgeable among themselves and talk over the heads of anyone who isn't in it - which was very likely the whole idea.

Is there a way to accomplish our original goals without joining in the fray and becoming part of the problem?  Yes.  Not "yes, in my opinion," or "here's the yes theory," but a flat, unequivocal yes.  We can simply figure out what is epistemically necessary, admit that we know what is epistemically necessary, and refuse to care about those people who are either too stupid to see epistemic necessity or too obnoxious to admit they see it.

In the first place, knowledge exists.  How do I know it exists?  For one thing, because I know I exist.  I knew I existed before I knew what knowledge was.  In fact, I still don't understand what knowledge is well enough to define it properly, except to say that knowledge is the faculty by which a mind accepts the existence of truth, and its own ability to distinguish truth from non-truth.  But that only defines the relationship of knowledge to mind.  Defining its relationship to truth is more difficult.  If I define knowledge as true belief, the definition would include instances of coincidental correctness* - beliefs that are true merely by dumb luck.

*Coincidental correctness is commonly called the Gettier problem among those who think any idea, no matter how obvious, should be credited to the first Ph.D. to write on it.

And no matter how much my belief is justified, there will still be instances of coincidental correctness.  Even if a belief is true by logical necessity, unless I recognize the logical necessity, I may think I know the belief is true for some erroneous reason.  Another problem is that I can't identify instances of knowledge unless I already know what the criteria for knowledge are.  And I can't know what the criteria for knowledge are unless I can identify instances of knowledge.

Nevertheless, I know that I know some things, because I at least know I exist.  But I also know that I thought I knew some things that I didn't really know, because I've made mistakes.  A skeptic may say that if I thought I knew some things that I didn't really know, then I can't really claim to know anything.  But he's wrong, because I know I've made mistakes, and I can't possibly be wrong about the fact that I've made mistakes, because if I were wrong about it, that in itself would be a mistake.  Therefore I know at least that much because of logical necessity.

Might the skeptic counter me by asking how I know logical necessity produces knowledge?  If he does, I would ask him if he knows he just asked me a question.  If he says no, then I have no obligation to answer him.  If he says yes, then I would ask him how he knows it.  His answer, however evasive, will ultimately rest on logical necessity.  And I would continue the "How do you know that?" game until it becomes obvious, at which time I would demand that he either admit it and shut the hell up, or just shut the hell up.  This is an example of what can be called an epistemically justified ultimatum.  A claimer of knowledge is under no more obligation to justify his claim than any challenger of that claim is to justify his challenge.  No one has the right to challenge a foundation that his own challenge is also based on.  It's a form of self-stultification.

Note the distinction between knowing something and having a right to claim to know something.  A hard-core agnostic may have an equal right to claim not to know certain things.  If he claims to know nothing, he can be proven wrong as soon as he makes a declarative statement.  Any statement of the form A = B is a claim to know that A = B.  If he says, "It is my opinion that A = B," he is claiming to know that it is his opinion.  If he says, "All of my statements are opinions, including this one," then if that statement is an opinion, it may be incorrect.  And if he claims to know it is correct, he has self-stultified.  Ultimately, an agnostic's behavior will prove what he knows.  If his behavior is inconsistent with his claim, then his claim is proven false.  If he refuses to acknowledge logical proof, he cannot be compelled to, but he also proves that he is not worth talking to or listening to.

Let's take a look at these two statements:
1.  I know I exist.
2.  I know that I know I exist.

These are actually two different kinds of knowledge.  I knew I existed before I knew how to talk.  But I didn't figure out that I knew I existed until I learned both the language necessary to express the statement and the logic necessary to know that the statement was true.  Therefore there exists both knowledge based on language and logic, and knowledge based on something prior to language and logic.  What could that be?  Hard-wired knowledge?  Possibly programmed knowledge, if you accept the existence of a Programmer.  I don't know, and possibly can't know, what that kind of knowledge is based on.  But it exists undeniably, just like I do.  And I don't need to listen to any skeptic challenging either type of knowledge, because he must rely on those same types of knowledge in order to justify challenging anything.

Statement #1 above exemplifies what can be called basic or immediate knowledge, because it happens before any other knowledge.  Statement #2 can be called reflective knowledge, because it is knowledge about something I already know.  We can also call the first kind pre-verbal knowledge and the second propositional knowledge, because the second relies on declarative statements, which are generally called propositions.  We can also call the first pre-logical and the second logical.  What we call a particular kind of knowledge can depend on the context in which we are talking about it.  Of course, we can also dogmatically insist on some traditional term depending on whatever philosophical tradition we prefer, and then hope nobody expects us to defend the entire tradition in order to justify using the term.  I prefer whatever term is most descriptive and least ambiguous in whatever context we happen to be discussing it at the time.  What a category is called is epistemically less important than identifying its boundaries (which illustrates the huge difference between epistemic importance and emotional importance).

Before continuing, let's examine basic knowledge.  It contains more than just the knowledge that I exist.  Descartes would insist that it started with thought, and that knowledge of our existence followed from knowledge that we think.  But knowing that I think and knowing that I exist are both in the category of basic knowledge, so in this context it doesn't matter which came first.  We also know that we experience sensory perceptions on this basic level.  We don't know that we are perceiving what we think we are perceiving, but we know that we are perceiving something.  Even if we are dreaming that we are perceiving something, we are still perceiving something.  We are, however, mistaken about where the perception is coming from.  We also know on this basic level that we emote.  If we think we are feeling a certain emotion, then we are necessarily feeling it.  We can't possibly be mistaken about that, though we might be mistaken about what to call the emotion - especially considering that our judgment is being influenced by that emotion at the time.  Look at how many different emotions are labeled "love".  Furthermore, we feel a wider variety of emotions than we have names for.

Within basic knowledge, various types of it can be identified.  If we think something, and we also perceive something, then somehow we know that our thought and our perception are two different things.  If we feel an emotion, we know that an emotion is a third different thing.  Our minds automatically differentiate one thing from another, if those things are different enough for a difference to be recognized.  Even within the categories of thought, perception, or emotion we make distinctions: one thought from another, one perception from another, one emotion from another.  Such differentiation is a type of basic knowledge.

However, in one sense, all knowledge is differentiation.  If I know X exists, I have differentiated X from the absence of X.  But differentiating between two existing things goes beyond just recognizing the existence or non-existence of something.  The difference is identifying existence vs. judging essence.  Differentiating between the qualities (essences) of two existing things requires more than just differentiating a thing from its absence.  Conversely, we recognize the apparent identicality of two things when no difference is apparent.  And note that we are just claiming to know the appearance of things.  We have all seen enough optical illusions to know we can't be certain of any perception beyond its appearance.  Recognition of apparent difference or identicality of essences is therefore another kind of knowledge, but still on the basic level, because it happens prior to our understanding of logic.

In fact, basic knowledge gives rise to the laws of thought on which logic is based.  I know that I exist, think, perceive, and emote.  I also know that I am what I am, I think what I think, I feel what I feel, etc.   Upon recognizing the similarity among these obviously true statements, my mind discovers a category for all of them - "A equals A" - the law of identity.  I have abstracted a universal principle out of a list of particulars.  And I have done it without even knowing what a principle or an abstraction is.  Also, without knowing what a category is, I have learned to categorize.  The other laws of thought soon follow.  I know when I am thinking, perceiving, or emoting something, and when I'm not.  Therefore I know A does not equal not-A - the law of non-contradiction.  I also know if a thought is the same or different from another thought.  Therefore I know A is either B or not-B - the law of excluded middle.  Thus the foundation of logic emerges from basic knowledge.
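
For readers who prefer symbols, here is a minimal sketch of those three laws in their conventional propositional form (the notation is standard logic, not the essay's; it parallels, rather than reproduces, the A-and-B wording above):

\begin{align*}
&P \rightarrow P && \text{(identity: a statement implies itself)}\\
&\neg(P \wedge \neg P) && \text{(non-contradiction: a statement and its negation are never both true)}\\
&P \vee \neg P && \text{(excluded middle: either a statement or its negation is true)}
\end{align*}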

We have now identified two subsets of knowledge within basic knowledge:
Differentiation (which is more correctly the recognition of apparent difference or identicality of essences)
Categorization

Once we add logic, other kinds of knowledge emerge.  Differentiation carries an area of uncertainty when two similar things may or may not be distinguishable.  Applying logic to this, we have knowledge of greater or lesser similarity.  This can be called comparative knowledge, and it gives rise to knowledge of degrees, which gives rise to quantification.

And we're still not done.  Reflective knowledge requires a memory, not only to store it, but to have something to reflect on - usually.  Of course, we can have knowledge about two things simultaneously if we perceive them simultaneously, but most of our reflective knowledge requires a memory.  In order to know anything about a new piece of data, we have to compare it to another piece of data in our memory.  For example, if I receive a particular sensory impression, I know by immediate knowledge that I received it, but I need at least one other sensory impression stored in my memory in order to know whether this new impression is the same or different.  And we do know it, as long as we are talking about appearance.

That concludes the part of this essay resting on epistemic necessity, because unfortunately we don't know whether the data in our memory is real or imagined.  We are almost always certain enough of it to bet our lives on it, but we don't know it by epistemic necessity.  A rigorist philosopher might insist that knowledge of remembered data should not be called knowledge at all, but even if he's perfectly correct, no one but another rigorist philosopher will give a damn.  We simply must treat remembered data as knowledge in order to operate in the world.  And if we don't operate properly, we will be unhappy.  And being happy is more important to us than being right.  If you don't believe that, figure out why you care about being right.

For the same reason, we claim to know that the external world really exists.  We don't know that for sure.  We may be dreaming, or possibly even a disembodied brain in a vat imagining our entire universe.  We may be software on some deity's hard drive.  I don't even know for sure that I'm not all that exists, and that everything I think I perceive is imaginary.  That position is called solipsism.  We rarely consider these possibilities because they have nothing to offer us.  We prefer to claim to know that the external world exists because claiming the reverse would bring unpleasant consequences.  In other words, we do it for purely emotional reasons.  But if we must call something knowledge that isn't really knowledge, let's at least recognize it as a lesser kind of knowledge.

Considering basic knowledge and reflective knowledge (including their subsets) together, the one thing they have in common is that we can't possibly be wrong about them.  They are epistemically necessary, so we might as well call their category epistemic knowledge.

The assumed reality of remembered and perceived experiences could be called assumed or apparent knowledge.  But those terms are not as strong as we would like.  e.g.  I like to think that I know I have two hands, because I've perceived them for as long as I can remember.  But if I am a brain in a vat, all of my perceptions are imaginary.  And if I am dreaming or hallucinating, I might be just as certain that I have three hands or no hands.  So if I want to think that I know I have two hands, I must admit that my knowledge is contingent on certain presuppositions:

1.  that the world I perceive actually exists
2.  that my perceptions come from the real world

So the next category after epistemic knowledge can be called contingent knowledge.

Though I know when I perceive something, I dare not claim that I always interpret my perceptions correctly, because I know I've made interpretive mistakes - e.g. a straight stick that appears to bend in the water.  So the more interpretation is involved in identifying the source of a perception, the less I can know I am right about it.

And unless it is given that my memory is reliable, I don't know that these things I call hands are actually called hands, or that I've ever perceived them before.  Therefore, in order to claim that I know I have two hands, I need two more presuppositions:

3.  that my memory contains remembered, and not newly created data
4.  that the data is real and not imagined

Of course, some knowledge claims require more presuppositions than others.  So the best we can do with contingent knowledge is to recognize its necessary presuppositions before claiming it.  For any true proposition there exists a set of presuppositions, which, if given, enable a person to know that proposition is true with certainty.  If any presupposition is missing from that set, then the truth of that proposition is probabilistic, even when that probability is so close to 100% as to be indistinguishable from certainty.

Our language has a problem expressing probability in terms of a percentage.  We can legitimately say "1% probable" or "99% probable," but "100% probable" means certain.  Yet "100% certainty" is redundant, because certainty has no degrees, so "99% certain" is technically incorrect.  We could call it "99% certitude," unless someone wants to quibble that certitude means the same as certainty.  Note how disgustingly imprecise our language is for describing philosophical reality!  Earthly languages were not designed for correctness; they evolved to get people what they want.  And we must all condescend to erroneous conventions in order to communicate.

So we know epistemic knowledge with 100% certainty.  We know contingent knowledge with the same degree of certainty as we know the presuppositions on which it's contingent.  We know probable knowledge with less than 100% certainty.  So it isn't really knowledge at all, but we usually call highly probable beliefs knowledge anyway in order to function in the world.  Note that probable knowledge is a subset of contingent knowledge, because it's based on the presuppositions that probability exists and that we understand it correctly.

It has been said that the truth of a proposition can be known by the impossibility of the contrary.*  Actually this only works when the contrary is also a contradictory.

* an idea originated by Cornelius Van Til, and popularized by Walter Martin in the 1970s

Two contrary statements can both be false; they just can't both be true.  Impossibility of the contradictory is what proves a statement true.  We know epistemic knowledge is true, because for any statement in this category, the contradictory is impossible.  We claim to know that high probability is knowledge, because for any statement in this category, the contradictory is not likely enough to be worth considering.
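
A small worked example may make the contrary/contradictory distinction concrete (the numbers are my illustration, not the essay's):

\begin{align*}
&\text{Contraries: } n > 5 \text{ and } n < 3 && \text{they cannot both be true, but both are false when } n = 4\\
&\text{Contradictories: } n > 5 \text{ and } n \le 5 && \text{exactly one of them is true for every } n
\end{align*}

Ruling out $n < 3$ would not prove $n > 5$; ruling out $n \le 5$ would.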

One could argue that knowledge based on logic does not necessarily fall into the category of impossibility of the contradictory, but merely inconceivability of the contradictory.  e.g.  Just because one can't conceive of a universe in which logic doesn't apply doesn't mean one can legitimately assert that such a universe is impossible.  Even within this universe, scientists have discovered pieces of data which appear to be in logical contradiction.  Yet one can't deny that logic-based knowledge is epistemically certain, because denial itself must be expressed logically.

New topic:  What about volition - acts of will?  Should the knowledge that I will to do something be on the same level as the knowledge that I think, perceive, and emote?  It depends on whether or not volition immediately precedes action.  e.g.  If I will to pull the trigger of a gun and then immediately do it, then I knew that I willed to do it by basic knowledge.  But if I think I am about to pull the trigger and then don't do it, then I was wrong when I believed that I willed to do it.  If I think that I will to pull the trigger a minute from now, I don't know it, because I may change my mind.  Even if I do pull the trigger a minute from now, the only time I know I am going to do it with epistemic certainty is right before I actually do it.  I may, however, know that I will to pull the trigger a minute from now with a lesser degree of certainty.

I now digress to distinguish between willing to do something and wanting to do it.  Wanting is an emotional event.  If I want to pull the trigger a minute from now, I know by basic knowledge that I want to do it.  But I don't know if I will still want to pull the trigger a minute from now.

An important question:  If I want to pull the trigger at the moment of decision, will I necessarily pull it, or is my will controlled by me rather than by my emotions?  Can I will to act contrary to my strongest emotion at the time of decision?  If I can, what is in me other than emotion to persuade me to make that decision?  If not, then I appear to be nothing more than the servant of my strongest emotions.

Of course, reason can restrain me from acting in accordance with my more basic emotions.  But how does reason do that?  It tells me that if I don't restrain my will to act in accordance with my basic emotions, I will likely reap consequences that will make me more unhappy than I would have been otherwise.  So reason appeals to greater emotional benefit in order to restrain me from acting so as to get lesser emotional benefit.  Or possibly reason tells me that a long lasting but lesser emotional benefit will be better in the long run than a greater but ephemeral emotional benefit - or possibly that delayed gratification will likely outweigh immediate gratification.  In any case, a judgment about emotional economics ultimately guides all of my actions.  It is not my strongest emotion that governs my actions, but rather my judgment about emotional consequences.

Back to epistemology.  Let's outline what we have so far.
Epistemic knowledge
    Basic knowledge  (impossibility of the contradictory)
        Knowledge that I exist, think, perceive, emote
        Differentiation  (recognition of apparent difference or identicality of essences)
        Categorization
    Reflective knowledge  (inconceivability of the contradictory)
        Knowledge that I remember
        Logical knowledge
        Comparative knowledge
        Knowledge of degrees
        Quantification
Contingent knowledge  (based on a set of necessary presuppositions)

Note that these are not preconditions which are necessary for X to be true, but rather presuppositions which are necessary for me to know that X is true.
I know X is true if all necessary presuppositions are true.  Therefore:
If I know all presuppositions are true,
then I know X is true.
If all presuppositions are true, but I don't know it,
then X is true, but I don't know it.
If one necessary presupposition is false,
then X may or may not be true.
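
As a sketch only, those three conditionals can be compressed into the shorthand of epistemic logic, where K(...) abbreviates "I know that ..." and P abbreviates the conjunction of all the necessary presuppositions (the notation is mine, not the essay's):

\begin{align*}
&K(P) \rightarrow K(X) && \text{(knowing all the presuppositions yields knowledge of } X\text{)}\\
&(P \wedge \neg K(P)) \rightarrow (X \wedge \neg K(X)) && \text{(presuppositions true but unknown: } X \text{ is true but unknown)}\\
&\text{some presupposition false:} && X \text{ may or may not be true}
\end{align*}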

One important presupposition for claiming to know anything about the external world is that I am not dreaming, hallucinating, or insane.  I have no epistemic justification to claim to know any of these things.  But once we are outside the category of epistemic knowledge, other criteria for justification become available.  e.g.  When the truth or falsity of a given proposition cannot be known, and there is insufficient data to judge probabilities, and you must act as though it is true or false, then its truth or falsity can and should be judged pragmatically.

Being wrong and caring about being wrong are two different things.  If I am wrong, why should I care?  Well, because of the unpleasant consequences which are likely to come from being wrong.  But then if no unpleasant consequences are possible, I have no reason to care about being wrong.  e.g.  If I claim to know I am not dreaming, and I'm wrong, what unpleasant consequences can come of it?  Is somebody likely to accuse me of lying?  If they do, so what?  If I'm wrong, my accuser doesn't exist, except in my dream.  Should I fear dream accusers?  Unless they can somehow trap me in my dream and punish me there, I have no reason to give a damn.

Let's say I'm hallucinating or insane, and I'm on a witness stand under oath.  Let's say I claim to know that I'm not hallucinating or insane.  Is the judge, or anybody else, likely to charge me with lying?  If they do, I can beat the rap on grounds of insanity.

I can claim to know that I am not dreaming, hallucinating, or insane for more than just emotional reasons.  I have a moral and legal right to do so.  A lie is considered immoral because lies are usually told for unethical motives, and lies usually cause more harm than good.  A lie which is neither for unethical motives, nor likely to cause more harm than good is not immoral.  Therefore I have moral and legal justification to make certain epistemic claims without epistemic justification.


I don't yet have an ending for this essay.