Sunday, January 11, 2015

Scientific Laws and Theories

To understand the difference between a scientific law and a scientific theory you need to have some understanding of the history of modern European science. It all kind of starts with the European Renaissance of the 14th through 17th centuries. As far as I can tell, it started in the arts and included people like Leonardo da Vinci and Dante. It got a boost and successfully spread because of the invention of movable type. Renaissance means rebirth. If we think of the period before the Renaissance as the Dark Ages, you can see why they called it a rebirth. Humanists of the Renaissance started studying the classical thinkers and started questioning "the way it has always been." The Renaissance, along with political changes, allowed the Protestant Reformation to gain a foothold. For our purposes, the main change, at least at first, was not a questioning of faith, but a questioning of the world view that had been passed down. That meant Aristotle. Aristotle was a great thinker and made many advances in physics and biology. His thinking was then adopted by the church and codified into church doctrine by Aquinas and others. Most of the great scientific thinkers of the time were Christians: Copernicus, Galileo, Newton, Descartes, etc. In fact, in his Meditations, Descartes offers a proof for the existence of God.

Two Books: Two Laws
Because these scientists were Christians, they thought that the world was created by God. When they discovered how nature operated, they thought that they were simply revealing the rules that God had set in place. The world was put together like a giant clockwork, and it was the scientist's goal to figure out how this clock worked. Today the clock has been replaced by the computer as our inspiration, but back then it was the mechanical clock with all its gears that dominated. The other dominant inspiration for Christians at that time was the Bible. The Bible was God's law. Since the 14th century, theologians in the Catholic Church had been referring to the Book of Nature as God's other book. These two books were 'written' by God and represented two ways in which God's Law could be known. So it was natural, we might say, for Christians to see and label the things they were discovering through the scientific method as Laws. Newton's Law of Universal Gravitation is a good example. Newton did not create gravity; he simply discovered something that God had written into the way the world operated.

Laws do not exist unless there is a law giver. And since God had created the world and the rules that govern the world, it was natural to think of those regularities as laws of nature, because they were laws of the Supreme Law Giver. The two books theory of God's law had made its way into the language of the scientific revolution.

Two Things Happened
From this point on in the story, two things happened that changed the way the scientific community thinks about nature. The first is that more and more scientists stopped believing in God, so they no longer thought of the regularities of nature as created by God. The idea of a law doesn't make as much sense without a law giver.

The second thing that happened is that some of these so-called laws were found to be wrong. Scientists became much more aware of the provisional nature of their discoveries. This brought about a more humble attitude in the scientific community. Even something as sure as Newton's Law of Universal Gravitation has been superseded by Einstein's theory of general relativity.

Theory
Which brings us to the rise of scientific theory. In most academic disciplines, and even in less academic practices such as business management, there develops a specialized vocabulary. Terms that are used in everyday language are given special meanings that no longer communicate the same thing they do for the lay person. Philosophers are especially notorious for doing this. We call it jargon. So we have to be careful when we use specialized terms and take them out of context. For the lay person a theory is a guess. I have a theory about what my daughter has been doing when she stays out late at night. My theory is just a hunch or a guess. I don't have proof because if I had proof I would know and it would no longer be a theory. But scientists use the word hypothesis for guesses.

Once a hypothesis has been tested and analyzed by careful empirical observation and re-analyzed by other teams of researchers, then it becomes a theory. The word theory is reserved for those guesses that have come to be accepted among members of the scientific community who work in that specific area. Scientific consensus leads to the use of the term theory.

Science is very specialized. Physicists are not considered experts on theories of evolution. Geologists are not really qualified to make judgments about the latest stem cell research. So stem cell researchers compete with other stem cell researchers to see who will be the first to make some breakthrough and cure cancer or something. If you are first you could win the Nobel Prize, like Watson and Crick did for the structure of DNA.

There are Theories, and then there are Theories
Theories are not laws in waiting. Theories are at the top of the heap. Remember that Einstein's theory is better than Newton's law. That doesn't mean that all theories are equal. Experiments in physics can be conducted with very high precision and confirmed with a correspondingly high degree of confidence. Physicists often have decisive refutations of hypotheses or experiments. But not all scientists have that same luxury. Theories about gene-environment interaction or studies involving human diet are less likely to allow definitive falsification.

Nevertheless, theories are the best we can do. Theories don't eventually get "proven" and become laws. Theories are already as proven as anything in science gets. Still, sometimes new ideas do get called laws. There are several reasons for this. Sometimes I think it's just a throwback to the earlier use of law by people who have not thought through the distinctions. Moore's Law might be an example of this. It's not really a natural law or theory about the world. So we still get things called laws, but they are not natural empirical regularities. Scientists are not philosophers; they don't worry about language as much. But there are specific examples that might go against my theory.

Statistical Laws
There are still a number of laws being put on the books. Gustafson's Law, Metcalfe's Law, Reed's Law, etc. But it seems to me that these are just various types of statistical laws based on sets rather than the kinds of theories generated by traditional scientific empirical observations.

In summary, my point is that without the idea of a law giver it does not make as much sense to label the regularities of nature "laws." And besides this, scientists have also adopted a more humble attitude toward their enterprise. A scientific theory is not some kind of inferior guess or some child that will some day grow up to become a law like a bill sitting on Capitol Hill.





Sunday, January 4, 2015

Mind Over Body

Mr. Wright (not his real name, but that's what he's referred to in the literature) had cancer and was on the edge of death. His doctor figured he only had days to live. Mr. Wright was given a serum that he believed might cure his cancer. Sure enough, in just a matter of days his tumors disappeared. He was released from the hospital, happy and cured. A couple of months later he read a report that the treatment he had received was no good. His tumors promptly grew back. Then his doctor convinced him that a new formulation of the serum had been shown to be effective. He took it (this new formulation was just water) and once again he was cured. Another couple of months went by and he read more reports about that serum being a scam and completely useless. He died two days later. Such is the power of our beliefs over our physical nature. We call this super power the Placebo Effect.

We all have this super power, which means medical research faces a particular challenge. Most scientific research subjects such as geology, astronomy, or paleontology do not have a vested interest in the research. A rock or a fossil does not care if you think it's a thousand years old or a billion years old. But a human has ideas about the research being done on him. And those ideas can change the very research being done.

If I give you a pill and tell you it will make the pain go away, there's a chance that your pain will go away even if the pill I gave you is nothing but a sugar pill. Presumably, sugar itself has no effect on your pain. So what's going on?
It's kind of like Neo in The Matrix: your mind makes it real.



Here's a nice video explaining some of the strange effects of placebos:

Because of the placebo effect, research on human subjects needs to control for it. What this means, in practical terms, is that people are divided up into groups. One group, for instance, will get the placebo and the other group will get the real medicine that is being tested. It's like a contest. If the medicine helps more people, then the medicine wins; but if about the same percentage of people improve in the placebo group as in the medicine group, then, as far as we can tell, the medicine didn't actually do any good.
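To make the "contest" concrete, here's a minimal sketch in Python. The group sizes, improvement counts, and the choice of Fisher's exact test are all my own invented illustration, not taken from any particular study:

```python
# Minimal sketch of the placebo "contest" with invented numbers.
# We compare how many people improved on the drug vs. on the placebo.
from scipy.stats import fisher_exact

improved_drug, total_drug = 34, 100        # hypothetical treatment group
improved_placebo, total_placebo = 30, 100  # hypothetical placebo group

# 2x2 table: rows = group, columns = improved / did not improve
table = [
    [improved_drug, total_drug - improved_drug],
    [improved_placebo, total_placebo - improved_placebo],
]

odds_ratio, p_value = fisher_exact(table)

print(f"Drug improvement rate:    {improved_drug / total_drug:.0%}")
print(f"Placebo improvement rate: {improved_placebo / total_placebo:.0%}")
print(f"p-value: {p_value:.3f}")

# A large p-value means the difference could easily be chance: as far as
# we can tell from these numbers, the drug is no better than the placebo.
```

Real trials involve much more than this (sample-size calculations, randomization, dropouts), but the basic comparison really is this head-to-head contest.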

No Better Than a Placebo

To be "no better than a placebo" is the ultimate failure for any kind of medicine, procedure, or other medical treatment.

Homoeopathy vs Placebo - as you might expect, the placebo wins.

Tylenol vs Placebo for Lower Back Pain - Placebos also win in contests where you might not expect them to win. For instance, acetaminophen can't beat a placebo for lower back pain.

You can do your own web search to find other placebo wins. It's not my goal to list them all. Whatever your favorite medical treatment for whatever ails you, it might not be a bad idea to search for placebo controlled studies to find out how effective it is.

Double-Blind

No, double-blind does not mean that the scientist conducting the study is blind in both eyes. It really means that neither the person taking the medicine/placebo nor the person dishing out the pills knows who is getting the placebo and who is getting the test drug. This is because we've found that if the doctor knows which pill they are giving to the patient, they can give subtle hints that can adversely affect the study.
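Here's a toy sketch of the idea, with made-up participants and kit numbers. Real trial protocols are far more careful, but the core trick is the same: patients and the staff dispensing the pills see only meaningless codes, and the key stays sealed until all the results are in.

```python
# Toy illustration of double-blind allocation (invented example, not a
# real trial protocol). Pills for "drug" and "placebo" look identical and
# are packaged in numbered kits.
import random

participants = [f"participant_{i:02d}" for i in range(1, 21)]
treatments = ["drug"] * 10 + ["placebo"] * 10
random.shuffle(treatments)

kit_numbers = list(range(101, 121))
random.shuffle(kit_numbers)

# The key linking kit number -> treatment is held by a third party.
sealed_key = dict(zip(kit_numbers, treatments))

# During the trial, doctors and patients only ever see this list:
dispensing_list = dict(zip(participants, kit_numbers))
print(dispensing_list)

# Only after every outcome has been recorded is the key opened,
# so no one can give hints -- knowingly or unknowingly -- along the way.
unblinded = {p: sealed_key[kit] for p, kit in dispensing_list.items()}
```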

Even a horse is smart enough to read subtle non-verbal cues. There's a famous case of a horse who could do math. His name was Clever Hans. People would give the horse simple math problems like 6-3 or 4+1 and the horse would tap its foot up to the right number and then stop. What they found was that the owner of the horse would unknowingly give the horse a signal when it reached the right number. When the owner was hidden behind a curtain, the horse's math skills disappeared. Here's a picture of Clever Hans putting on one of his shows:


XKCD is right. Not every experiment can be double-blinded.

One of my favorite blogs, Science Based Medicine, has an article on Acupuncture that perfectly illustrates why blinding is important whenever it can be done. In this experiment someone who was not involved and had no knowledge about which children were getting the acupuncture should have done the evaluations. When the experimenter evaluates her own work she's not going to be the most objective observer. Blinding is necessary in order to counter confirmation bias, which we've discussed here.

So one of the important factors you need to consider when reading or hearing about some so-called scientific study is whether it was conducted properly. Doing good empirical studies on human beings is hard. So, where possible, the study should include a placebo control group and should blind both the patient and the doctor, so that no one really knows who is in the treatment group and who is in the placebo group.

Actual medical research is more complicated than this. I would just like to take a second to point out a couple of complicating factors. For example, if there is a known effective (and by effective, I mean more effective than a placebo) treatment for some condition, then it would not be a good practice to withhold that treatment from people. So new medical treatments are sometimes put up against the known effective treatment in order to see if the new treatment is at least as effective, or more effective, than the known effective treatment. This means that not every piece of research will directly control for placebos. But if the known effective treatment was previously studied using a placebo control group, then the new study can safely bypass a placebo control.

The other problem is that some things we want to know about are not easily testable in a laboratory. Sometimes we want to know about lifelong habits, and although this kind of study can be done on mice that only live two years, it's hard to do on humans who are more likely to live seventy years. We'll deal with this problem in another post.

End Notes:
You can read more about Mr. Wright in The New York Times, in Scientific American, or in the original paper: Klopfer, Bruno, "Psychological Variables in Human Cancer", Journal of Projective Techniques, Vol. 21, No. 4 (December 1957), pp. 331–340.

Sunday, December 21, 2014

Reliable Process

I would much rather drive a car that's reliable than one that's not. I have two cars: one which has a lot of miles on it and has broken down a couple times recently. The other is newer and runs great. Which one am I more likely to take on a cross-country drive? The more reliable one, duh. It's the same way with things I believe. I would rather give positive assent to new ideas that come from people, places, and processes that are more reliable. Otherwise I might not make it very far in my intellectual wanderings.

One of the chief reasons to trust the scientific method, on a general basis, is that it has proven to be reliable. More reliable than any other method of assessing the world we live in. I think it's this reliability that justifies my trust in the scientific method. But there are a few caveats. The philosopher Alvin Goldman has this to say about how we are justified in believing any proposition: 
A person S is justified in believing a proposition p if and only if S's belief that p is produced or sustained by a process or method M and M is reliable. (Goldman, 1979)
Without going into a huge philosophical debate about all the various theories of justification, I just want to point out that many philosophers do not accept Goldman's thesis. I use it here because I think it serves to support a way in which we can be justified in believing the propositions of science. However, I want to make one slight modification. Goldman's statement is about "A person S" and that person's internal justification. It is not really about scientific justification. I think it's useful for us to take a look at because it describes how a reliable process might possibly be the basis for justification of one's beliefs. There is a difference between these next two propositions: 
a) I believe in the scientific method because it is a reliable process.  
b) When I have believed in some proposition accepted by the scientific community I have found it reliable and therefore I am justified in believing other scientific propositions that have scientific consensus. 
Now, a) and b) are not formal statements and are quite sloppy, but I think they are good enough for my present purposes. a) is an attempt to point to the reliability of everything that falls under the rubric of science. The problem is that not every published study is reliable. Not every experiment is performed reliably and not every scientist is reliable. Scientists are human, and therefore human biases come into play in their research. So if any one scientist is not necessarily reliable, and if every experiment that looks like it follows the scientific method is not reliable, then how can science as a whole produce reliable propositions?

Before I answer that question, I want to take a moment to explain something else. Why is it that I am talking about propositions? The reason is that only propositions have what is called 'truth value'. Rocks are not true or false. Even atoms are not true or false. Only statements or propositions about rocks and atoms are true or false. Our beliefs about the rock or the atom might be false, but even so, every belief involves some sort of proposition. I experience light hitting my retina; the optic nerve transmits information about that experience to my brain, where it is processed. I then come to either believe that there is a rock sitting in front of me, or I might believe that I am hallucinating, or dreaming, or that someone is playing a trick on me. Philosophers refer to my belief as a propositional attitude. Other propositional attitudes are expressed by verbs like hope, know, wish, doubt, regard, feel, etc.

Back to where I was. Proposition a) says that I believe in the scientific method, even though I can have different propositional attitudes about specific papers or hypotheses. When a team of scientists published a report that they had experimental evidence that neutrinos could travel faster than the speed of light, I did not automatically believe the results. I did not say to myself: this scientific experiment proves that neutrinos can travel faster than the speed of light because I believe that the scientific method is a reliable process. No, I said to myself, that would mean that Einstein was wrong, and from what I know, this theory of Einstein's has stood up to numerous experimental tests in the past. I'll just wait until it's confirmed by another scientific team. And, as it turns out, further experimentation showed that the initial experiment was wrong.

Holding out for further confirmation is actually what one should do when one believes in the scientific method. The actual method that the scientific community uses is one in which experiments need to be reproducible. Scientists themselves rely on other scientists to attempt to reproduce their results and confirm their hypotheses.

The scientific method seeks the consensus of the community. Scientific consensus is a little hard to define. Basically it means that different scientists agree that some theory is true or that it is at least reliable. But science is not a democracy. Mostly we measure scientific consensus by looking at published papers in peer-reviewed journals on a particular subject to see where different scientists stand. The popular press is not a reliable informant on scientific consensus because it is not as interested in truth as it is in finding interesting things to report. Unfortunately, scientific journals can also favor interesting research for publication, but this is considered a problem in the scientific literature in a way that it is not a problem in the popular press.

But this problem is being addressed. In fact, the scientific method is constantly under revision as we find biases in research and then address those biases. For instance, medical research is plagued by the problem of the placebo effect. Once this effect was discovered, scientists invented more and more ways of controlling for it. Other biases, like confirmation bias, are also accounted for by using randomization and other statistical methods.

As we can see, the scientific method is not 100% certain, but overall it is reliable. And in certain fields and concerning certain theories there is higher reliability than in others. I don't know of any scientific body that would deny that the Earth orbits the Sun. The degree of consensus about this theory is extremely high. Therefore I feel confident that my belief in the theory that the Earth orbits the Sun is justified. There are other scientific theories that do not necessarily have as much consensus. I tend to proportion the strength of my beliefs according to the strength of the justification I have for them. There are a lot of things I believe about the world that come from what the scientific community has discovered. I do not have direct observational proof for many of them, including the proposition that the Earth orbits the Sun. Some years ago, I heard about some missionaries who were trying to convince some indigenous population that the Earth orbits the Sun. The locals were not convinced and asked the missionaries to prove it. Think about it. Could you prove that the Earth orbits the Sun? How would you go about doing that?

I'm not going to go into it right now. You can search the Internet for an answer. But I would just like to point out that the theory, in its modern form, was first published by Copernicus in 1543, but the first observational evidence was not found until 1727 by James Bradley, and not again until 1838 by Friedrich Wilhelm Bessel. It took that long until we had telescopes powerful enough to make the needed observations. Although Galileo had good reasons (which he published in 1610) to believe that the Earth orbits the Sun, his evidence was not conclusive. It wasn't until after Bradley and Bessel that we would have scientific consensus on the matter. Sometimes it takes a long time for the scientific evidence to become great enough for consensus. So far, no one has found good evidence to reject this theory. Astronomers still believe that the Earth orbits the Sun. And yet, in early 2014 a documentary came out in favor of geocentrism, which is the idea that the Sun orbits the Earth.

I'll stick with the scientific consensus. The scientific method got us to the Moon, to Mars, and even beyond the edge of our solar system. It gave us cell phones, microwaves, cars, computers, and the Internet. If you can read this blog, thank a scientist.



End Notes:
  • Goldman, Alvin, 1979. “What is Justified Belief,” in G. Pappas (ed.), Justification and Knowledge, Dordrecht: D. Reidel.
  • Pappas, George, "Internalist vs. Externalist Conceptions of Epistemic Justification", The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/fall2014/entries/justep-intext/>.

Sunday, December 7, 2014

A Wanderer's Guide

When the Thirty Years War started in 1618, Rene Descartes was 22 years old. This war spanned most of the rest of his life. Instead of becoming a lawyer like his family wanted, he joined the army. So by the time he wrote the book that changed the course of Western philosophy, the Discourse (1637) (its full name is Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences), he had spent the past 19 years of his adult life in the midst of an ongoing war. Descartes was also a great mathematician. I mention these parts of his biography because, to me, they help make sense of his project. He sought order and certainty in an uncertain world. He attempted to give the whole world the kind of order and certainty that mathematics can provide. In this quest he systematically doubted everything he could until all he was left with was his doubt. Doubting his senses was easy. We've all been fooled at one time or another by our senses. But there was one thing he could not doubt, and that was his doubting.

We're kind of stuck. Stuck with our own brains, bodies, and other brains and bodies. We have a few holes in our head that allow us to experience some of the world outside, but we cannot always rely on these holes to give us accurate information. Descartes was so convinced that his senses were not good guides to knowledge that he sought a sure foundation through reason and came up with "I think, therefore I am" (cogito ergo sum). Too bad reason isn't any better of a guide through this world than our senses are.

In this battle of ideas, John Locke had a bit of a different solution. He wrote that knowledge is "the perception of the connexion and agreement or disagreement and repugnancy of any of our Ideas" (An Essay Concerning Human Understanding (1689)). Instead of seeking ideas that were clear and distinct, as Descartes had done, he allows for those holes in our head to contribute to our ideas. Locke's famous analogy is to a blank piece of paper that our perceptions essentially write on. When we are born our mind is a blank slate and our experiences write on that slate. Then we can use our reason to compare our various ideas and try to decide which are agreeable to us. Repugnant ideas are to be cast aside. But in a sense, this is exactly what Descartes was trying to do. Descartes knew that he had a bunch of false ideas about the world and he was looking for a way to figure out which of those were trustworthy and which were the false ones. 

Instead of a cave, like Plato had proposed, I prefer to think of myself as wandering around in a forest. I would be happiest if that forest was Kings Canyon in California, or the Coastal Redwoods like these in Muir Woods National Monument:



There are bears and other dangers and it's easy to get lost. Our perceptions of direction and distance can be unreliable when we are hiking in the forest. But if we are prepared, and especially if we have a good guide, or guides, who can show us around, we can enjoy ourselves and discover some wonderful things about this world. 

I'm not really prepared to supply some final resolution between Descartes and Locke. The debate they were engaged in has been going on ever since the 17th century. What I do want to propose is that there are some good guides we can follow as we wander around. The scientific method has probably gotten us further through this forest of uncertainty than anything else. The scientific method relies on both our senses and our reason. Together they can help us overcome the limitations that either one has on its own.

A child sees a butterfly and starts to follow it. Fully engaged in using her senses, purposely pursuing her prey, she runs from one flower to the next, around trees and over a stream. After half an hour of this she finally loses interest. She turns around only to find herself hopelessly lost. Yes, our senses alone can lead us astray. But reason alone, without actually looking to see what's out there, is also of little use. As the father sits in front of his tent in the woods, he isn't going to get any closer to finding his daughter before night comes. It takes both open eyes and a keen mind to survive in the woods.

Sometimes our senses do deceive us, but we can analyze the situation with our reason and overcome these limitations. We can ask someone else to take a look and see what they think. We can recognize biases in our reasoning and come up with ways to overcome those biases. Careful observation, detailed notes, statistical analysis, and having someone else try to replicate our results are just some of the ways we can gain a more useful understanding of the world we live in.

Go out and go hiking somewhere. But don't go unprepared.

Sunday, November 30, 2014

Confirmation Bias: I see it everywhere

Once I learned about Confirmation Bias I started seeing it everywhere.


Confirmation bias is like that. It wasn't until I bought an old Volvo station wagon that I began seeing them all over. Suddenly everyone was driving a Volvo. Or was it just me? Well, there are actually two things going on here. One is what psychologists call Inattentional Blindness. In inattentional blindness we simply fail to see certain aspects of our environment because we are paying attention to other things. There are lots of experiments showing this feature of our visual system, but the most famous is the selective attention test involving six people passing two basketballs around. By now you've probably seen it, but if you have not, then go ahead and watch it now:


Inattentional blindness is not confirmation bias, but the two seem closely related to me. If you have not been paying attention to something in your environment, you will probably leave it out of your picture of how the world really is. But once you notice it, then you might even start looking for it. At this point you've moved into the realm of confirmation bias.

Confirmation bias is accused of causing all kinds of problems. It apparently affects us constantly as we go about our lives. But there are ways around it. Scientists are very worried about how it can interfere with empirical [based on observation] research. The empirical method is one of the basic building blocks of modern science so a human bias that interferes with that can do a lot of damage to the scientific enterprise.

My favorite way of thinking about science is to conceive of it as an ongoing attempt at overcoming human biases. Scientists have been developing more and more sophisticated ways of using statistical methods to make sure their data collection is a true random sample (or as close to random as possible). In medicine they use double-blind experimental methods to make sure the person conducting the research does not know who is receiving which medicine, and therefore she cannot knowingly or unknowingly influence the experiment. There are other methods used for overcoming confirmation bias, but perhaps the most powerful method that the scientific method espouses is independent confirmation. If one team of researchers does a study and finds a particular result, they then publish their research so that other teams can try to replicate the study. If no one can replicate it, then the scientific community as a whole will not accept the conclusions of the original research. This is why scientific consensus is the strongest evidence we have for a theory.

There's something you can do to help you start to mitigate the effects of confirmation bias as well as other biases bouncing around in your head: start a surprise journal. Get yourself a journal and start taking note of things that surprise you; things you don't expect. Look at these things. They can be clues to ways in which your own thinking might need to be changed.

My wife and I were driving down the 75 to Atlanta one spring and, being from Chicago, she said that there were a lot more license plates from Illinois than from other northern states. Suspecting confirmation bias at work, I told her to get out a piece of paper and start keeping track of every northern license plate we came across. This is really the best way to be sure that your observations are accurate. By the time we got to Atlanta she had counted only one more license plate from Illinois than from Indiana. So, although she actually did count more Illinois license plates, she was surprised at how many Indiana vehicles she hadn't been noticing.

So if you're really curious, you have to do the hard work of keeping track of everything. This is easy while driving down a freeway for a couple hours. It gets a lot harder when keeping track of every encounter you have with other people for months on end in order to figure out if the full moon really does cause people to act strangely. That's why so many people continue to believe that the full moon actually has a real effect. Even though it has been studied numerous times and there is no statistically significant difference between when the moon is full and when it is not, many people continue to notice more strange behavior.
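If you did keep the tally, the final check is simple enough to do yourself. Here's a rough sketch with invented counts (the numbers and the use of a chi-square test are my own illustration, not from any particular study) of how you could test whether strange behavior really is more common on full-moon nights:

```python
# Invented tallies: is "strange behavior" more common under a full moon?
from scipy.stats import chi2_contingency

# Rows: full-moon nights, all other nights.
# Columns: nights with a strange incident, nights without one.
observed = [
    [9, 21],    # 30 full-moon nights (made-up counts)
    [88, 212],  # 300 other nights (made-up counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.3f}")

# A large p-value means the tally gives no reason to think the full moon
# makes a difference -- which is what the published studies keep finding.
```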


Saturday, November 22, 2014

On Not Knowing Ourselves

One of the reasons people study psychology and even philosophy is because they want to know more about themselves. But it's not as easy as we first think it's going to be.

In Plato's Phaedrus, Socrates says:
"But I have no leisure for them at all; and the reason, my friend, is this: I am not yet able, as the Delphic inscription has it, to know myself; so it seems to me ridiculous, when I do not yet know that, to investigate irrelevant things."
Or as Wittgenstein says:
“Nothing is so difficult as not deceiving oneself.”
OK, I can't help it. One more quote, this time from Confucius:
"Real knowledge is to know the extent of one's ignorance."
And there are a lot more quotes like this by the likes of Shakespeare, Darwin, Bertrand Russell, etc. I think I know what's going on here. Intelligent, thoughtful, and educated people run into a lot of less skilled people (OK, I really mean idiots) who think they have everything figured out. So they offer some advice and, hopefully, try to live up to it themselves.

I'm sure we've all run across these kinds of people. All you have to do is read a few comments below almost any article on the Internet. How is it that such stupid people can be so blissfully unaware of their ignorance?

One aspect of knowing ourselves is understanding our abilities and limitations. How intelligent are you? How good are you at reasoning, observing, understanding others, coping with difficulties, telling jokes? These, and many more questions, require us to have some sense of our own abilities as well as some sense of the abilities of others. But it turns out that we are really bad at this. One of the reasons (but there are many more) that knowing ourselves is so hard is that we often overestimate our own abilities. And this is especially true when we are most incompetent. My goal isn't to seek a way out of the cave. What I'm interested in is finding a better way of navigating this world we live in. Enter David Dunning and Justin Kruger. They have actually put these things to the test, the empirical test, that is.

You Are Not So Smart has a nice interview with David Dunning. Dunning is best known for his part in developing the Dunning-Kruger effect which goes a long way toward explaining why it's so hard to live up to the Delphic maxim.  They describe it this way in their original paper:
"People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it."
Or, as the abstract to another article says:
"Successful negotiation of everyday life would seem to require people to possess insight about deficiencies in their intellectual and social skills. However, people tend to be blissfully unaware of their incompetence. This lack of awareness arises because poor performers are doubly cursed: Their lack of skill deprives them not only of the ability to produce correct responses, but also of the expertise necessary to surmise that they are not producing them. People base their perceptions of performance, in part, on their preconceived notions about their skills. Because these notions often do not correlate with objective performance, they can lead people to make judgments about their performance that have little to do with actual accomplishment."
What jumps out to me from the graphs in their paper is that most people just generally see themselves as slightly above average. Most people -- on average, and no matter what their actual competence -- tend to place themselves somewhere between the 60th and 70th percentile. I must be especially prone to the Dunning-Kruger effect, since I used to see myself coming in at around the 80th to 90th percentile in most areas of my life, even though I know it's impossible for me to be that good at everything. Actually, I've never seen myself as good at art, music, or language learning. In everything else I was always sure I was better than you.

The thing is, I know that this can't actually be true. So I try, in all things, to live by the Principle of Fallibility. Basically, I could be wrong. I've been wrong before. I will be wrong in the future. What's to say that I'm not wrong now? If even most published scientific studies are wrong, what could protect me from the same fate? So now I rate myself much lower in most things, especially things I don't really have proper training in. But it's still hard not to see myself as above average. It's such a hard delusion to overcome. It might be that the only way to really overcome it is to become depressed. It's called Depressive Realism, but not everyone buys into it and I'm not planning on trying it anytime soon.

Since I'm too lazy to make my own YouTube videos, I offer this nicely done explanation of the Dunning-Kruger effect:


What I like about the TheraminTrees video above is that he points out that we can do something to improve the situation:
  • Training. I guess that if we could actually get our competence up to 60% then we'd have a more accurate assessment of ourselves. 
  • Independent feedback. If you aren't sure what your skill level is in something, you could find an independently graded test in that area or find some people who might be capable of telling you the truth; this might help your assessment. That is, if you believe it. The problem here is that people really close to the bottom don't seem to learn much from being exposed to the truth. 
    • I think this is what college really offers to students. When we read a book or even two or twenty books on a subject we feel like we have mastered the material. It's not until we take an exam or write a paper and turn it in for a grade that our self-understanding can be challenged. College forces us to face the challenge. If we are willing to listen to the feedback we can grow. 
    • Publishing on the Internet can also open us up to critics that might challenge our own self assessment. It can be intimidating. I'm intimidated right now. Who knows what's going to happen after I hit the Publish button?
I would also add that developing certain intellectual virtues could help mitigate the effects. Especially:

  • Intellectual Humility: Unfortunately, what I find most of the time is that we really think that if other people had intellectual humility they would see that we were right. This isn't the type of humility I'm talking about. For one, humility does not seek to humiliate or demonize the other. Humility is to open ourselves up to learn. There's a lot we don't know and a lot of ways in which what we think we know could be wrong. Keep that in mind next time you read a Wikipedia article. Have you also read all the references at the bottom of the page? If you haven't, then how much can you really understand? The article is just the tip of the iceberg. 
  • Intellectual Courage: Courage is the hardest virtue of all. It takes great courage to question our own cherished beliefs. I was looking for a YouTube video on intellectual humility and found a video by a Muslim encouraging his fellow believers to have more humility. But what he was actually espousing was not humility at all. He first accused unbelievers of vanity for getting a PhD just so they can put Dr. in front of their names, but then told his followers not to question Islamic scholars because they have studied far longer than you have. I don't know if you can see the irony in this, but it occurs to me that he doesn't have the courage to question his own beliefs, and therefore his humility is just an abstraction with no traction in the real world. 

If you want to know more about Dunning's work go to the Self and Social Insight (SaSI) Lab.

Question: Do people who actually think they are below the fiftieth percentile ever start academically oriented blogs? Isn't it a requirement that you think you know more than other people if you are going to start a blog that concerns critical thinking skills and philosophical investigations?

Sunday, November 16, 2014

Out of the Cave

It seems like whenever anyone reads Plato's Allegory of the Cave from the Republic, they always see themselves as in the light and anyone who disagrees with them as in the cave.

Here's a simple claymation rendering of the allegory.



Plato used this as an illustration of how his Forms exist on a higher level of reality than what we experience in our day-to-day lives.

Here's a longer (nine and a half minute) explanation of the allegory.


Basically, what Tim Wilson is pointing out in the video is that, although most people just read the Allegory of the Cave by itself, it helps to read the context in which it appears in the Republic. We need to know something about the Analogy of the Sun, and the Analogy of the Divided Line. This broader discussion is not what I'm interested in here, so I'll just let you read the Wikipedia articles on them if you want.

Wittgenstein and the problem of getting out of the cave

The problem of getting outside of the cave is similar to the problem of the blind men and the elephant. Both stories rely on some perspective outside of human possibility. For example, if the blind men and the elephant are used to show us that reality is different from what we each experience in our limited capacities, then that is assuming that there is a perspective outside of human experience. This perspective could only come from a super-human vantage point: a God's-eye view. In the story, the king can see the whole elephant, so the blind men look ridiculous with their limited experience. But we don't have that view, so all we can know is the limited experience of collective humanity. We can get outside of our individual perspective, but we cannot get outside of the experiences of the human race. (Lesslie Newbigin makes this same point.)

Wittgenstein does not tackle these things explicitly, but his thinking shows us the same thing. He uses the term "Form of Life," by which he could be taken to mean two different things, but for our purposes I appeal to his use of it as shared human behavior. The human form of life is the grounds on which we can speak to each other and learn from each other. This human form of life is based in what it means biologically to be a human being, but also socially in what it means to be embedded in a human culture using human languages to communicate. As he says,
"So you are saying that human agreement decides what is true and what is false?" -- It is what human beings say that is true and false; and they agree in the language they use. That is not agreement in opinions but in form of life. (Wittgenstein, Philosophical Investigations, section 241) 
In this section, Wittgenstein imagines someone questioning his philosophy, to which he answers that, in the wider sense that I take him, humans come to agreement within their form of life. This form of life is common to all humans since it is based on our biological makeup, but can differ some in terms of social and cultural embeddedness.

So Plato's cave is an allegory based on the presupposition that we can transcend the human form of life in which we are all trapped. There is no perspective outside the cave. It turns out that the king and his courtiers are also blind.

Just a couple of thoughts on why I take Wittgenstein to mean biological makeup. There's one line in Wittgenstein where he says that if a lion could talk we could not understand it. Here's an entertaining discussion of that very same quote:



And here is a rather funny, if not informative, comic about Wittgenstein's lion.