I recently had an interesting discussion about experimental philosophy with Matt Teichman, host of the excellent Elucidations philosophy podcast. Since his remarks pertain to the history of philosophy I asked whether he might be willing to write them up as a guest post for this blog. Here they are - enjoy!
I had an interesting exchange with Peter Adamson following the interview with Josh Knobe that I recently released. He mentioned a pet peeve about certain approaches to cognitive science-informed philosophy: the researchers seem content to just note down 'what people think' or 'what people's intuitions are' and more or less end the paper there. He also said he liked how Knobe and I didn't end our conversation there, but went on to discuss whether Knobe thought that the intuitions he and his colleagues had uncovered were correct.
I started thinking about why I had pressed Knobe on that question, and I realized that it was because there's a really interesting leitmotif running through the many experiments that Knobe and his many collaborators have done over the years, which doesn't get talked about that much, and which I wanted to try to bring out. To me, Knobe's papers don't read like standard-issue cognitive science. That's not intended as any kind of criticism of standard-issue cognitive science experiments, which I enjoy reading about! I have never myself attempted to design or perform that type of experiment, but I am hugely indebted to the field in my own research. There is sometimes a tendency within the philosophical community to underestimate the incredible amount of skill and practice it takes to design a cognitive science experiment responsibly, so it's worth taking a moment to appreciate how amazing they are.
OK. Granting all of that, I now want to talk about how Knobe's work comes with a twist. In an ordinary cognitive science paper, the experiments are designed and carried out with a view to testing a hypothesis about how the mind works, or perhaps about how the mind develops through childhood. For example, I might wonder whether children are typically able to reason with the word 'most' by age three, and test the hypothesis that they are by designing an experiment that gives three-year-olds a bunch of reasoning tasks to perform using that word and seeing how they do. (Interestingly, even seven-year-olds, who exhibit near adult-like language abilities, have trouble with this. 'Most' is a real toughie!)
Anyway, the experiments that make up Knobe's research program over the years are not like that. I mean, the researchers are definitely interested in testing hypotheses about how the mind works. But the twist is that they are also interested in extracting a philosophical worldview from experiments that examine people's intuitions. And extracting a philosophical worldview from people's intuitions tends not to be the focus of this type of research when it's done exclusively outside of a philosophy department.
So for example, in 'Intentional Action and Side Effects in Ordinary Language,' he describes the interesting finding, sometimes nicknamed the 'Knobe effect,' whereby people intuitively describe the side effect of something another person did as intentional when that side effect is bad, and as unintentional when it is good. Now, one response to this astonishing discovery might be to say that people's intuitions are confused or muddled. But interestingly, that isn't the path taken here. Instead, in follow-up work, such as 'The Concept of Intentional Action,' Knobe takes these intuitions seriously and tries to reconstruct the philosophical picture of intention that underlies them. What must intention be like, if it's the kind of thing that can somehow retroactively be affected by the side effect of a person's action? The whole thing is pretty weird, but that doesn't mean it isn't fascinating. And it's certainly no weirder than a lot of other phenomena in the vicinity, such as moral luck.
I think that what I just described is similar to what historians of philosophy do. A great historian of philosophy will read a text by some author in an ancient language, do their best to achieve proficiency in that language, learn as much as they can about the historical circumstances in which the text was written, the cultural circumstances that gave rise to it, etc., and try to reconstruct what the author's take on some central philosophical issue might have been. And the rest of us do more or less the same thing, in a comparatively amateurish way, when we read the classics of the canon in translation. Feeling your way into a position articulated by someone from a culture that no longer exists can be an incredible, transcendent experience, in which you temporarily jump out of your contemporary headspace and into a completely alien way of organizing your thoughts.
Great historians of philosophy also recognize that refusing to engage with a view that sounds strange is a bit too easy. The whole experience ends up being a lot more fun and enriching when you take what these texts are saying seriously and try to feel your way into where they're coming from, even if you don't agree at the end of the day. Perhaps especially if you don't agree. And there's something similar afoot in Knobe's work. Don't dismiss out of hand, as some kind of confusion, the philosophical view that people's folk intuitions are gesturing at, just because it seems theoretically bizarre. Try to determine how the view works and whether there's anything in favor of it.
In other words, you can see the history of philosophy and this type of cognitive science-informed research as providing two alternative 'inputs' to the process whereby you try to figure out whether a given philosophical view is correct. Alternative, that is, to the 'input' that involves you stating what you think about some issue, before you assess whether it's correct. In the one alternative, you do your best to reconstruct what someone at a historical and cultural remove thought about the issue. In the other, you reconstruct a stance on the issue that's implicit in people's intuitive judgments about hypothetical scenarios pertaining to it. Both alternatives yield the type of view that seems deeply alien at first, but turns out to hang together pretty well once you've broken down how it works.
Furthermore, I would conjecture that a lot of this research gets carried out specifically with the goal of unearthing interesting new philosophical views that are implicit in our behavior. It's as though the people who design these experiments design them in the way they do specifically because they're on the lookout for behavior patterns which, once sorted through, will yield genuinely new contributions to debates about the nature of action, intention, free will, moral responsibility, and so forth. It's really cool! And I suspect it is a big part of what has kept readers coming back to this type of work over the years.