Future, Present, & Past:
Speculative~~ Giving itself latitude and leisure to take any premise or inquiry to its furthest associative conclusion.
Critical~~ Ready to apply, to itself and its object, the canons of reason, evidence, style, and ethics, up to their limits.
Traditional~~ At home and at large in the ecosystem of practice and memory that radically nourishes the whole person.
Monday, January 31, 2011
I am not a mathematician, but I occasionally dabble in recreational mathematics. One thing I want to occasionally do with this blog is to throw out some observations or questions that occur to a layman like myself, someone who asks questions that are probably simple enough but to which I don't know the answer. These aren't meant as "brain teasers" (though sometimes they serve as such for me or for some of the kids I work with); I'm just leaving them out in case someone wants to point me to an obvious solution (or at least a method). I'm painfully aware that in asking this I make clear my own mathematical limitations, but there it is.
Everyone knows that the ratio of a Euclidean circle's circumference to its diameter is π:1. And these days, we are also familiar with the fact that Euclidean geometry does not transfer automatically to other surfaces. Thus, on the surface of a regular 3-D sphere, the ratio of circumference to diameter changes depending on how big the circle is. You can visualize this most easily by imagining a globe. The "great circle" drawn on the globe by the equator has a diameter that is exactly half of its circumference; imagine the diameter as a line drawn from one point on the equator, up through the north pole, and back down to a spot on the equator 180° removed from the starting point. This diameter is demonstrably half of the circumference; you could slide it down the face of the globe till it coincided with half of the original circle. Thus, on a sphere's surface, this circle's ratio of circumference to diameter is 2:1.
But not every circle will have a diameter of half its circumference; only a circle as big as the equator (a "great circle") will have this. Any smaller circle (and no circle on a sphere's surface can be larger, of course) will have a circumference whose ratio to the diameter is larger than this. The smaller the circle, the larger this ratio. You can visualize this if you imagine shrinking the circle, starting at the equator and edging up closer towards the pole. The closer you get to the pole, the more you approximate the Euclidean plane, and so the closer the circumference:diameter ratio gets to π:1. (This is so for the same reason that the Earth looks flat if you are standing on it. A sufficiently small circle can leave the curvature of the sphere's surface out of account.)
This is what occasions my question. If this ratio starts (at the equator, or any great circle) at 2, and reaches π at the pole (or at any geometrical point), then there is a particular size of circle on the surface of a sphere where the ratio of circumference:diameter is precisely 3:1. Where is this circle? I.e., how big is it in comparison to a great circle on the same sphere? (Or, say, to the diameter of the sphere itself?)
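If it helps anyone get started, the question can at least be set up numerically. A sketch in Python, assuming I haven't botched the standard spherical formulas: on a sphere of radius R, a circle at angular radius θ from the pole has circumference 2πR·sin θ and surface-measured diameter 2Rθ, so the ratio is π·sin θ/θ (which does come out to π as θ → 0 and 2 at θ = π/2, matching the cases above):

```python
import math

def ratio(theta):
    # circumference / surface-diameter for a circle at angular radius
    # theta (radians) from the pole: (2*pi*R*sin(theta)) / (2*R*theta)
    return math.pi * math.sin(theta) / theta

# The ratio decreases from pi (tiny circles) to 2 (great circle),
# so bisect for the theta where it is exactly 3.
lo, hi = 1e-9, math.pi / 2
for _ in range(100):
    mid = (lo + hi) / 2
    if ratio(mid) > 3:
        lo = mid   # ratio still above 3: the answer lies further out
    else:
        hi = mid
print(lo)          # ≈ 0.5236, which is pi/6, i.e. 30 degrees of arc
```

Curiously, the bisection lands on θ = π/6 exactly, since sin(π/6)/(π/6) = (1/2)(6/π) = 3/π: the circle of ratio 3 sits 30° of arc from the pole, and its surface diameter (2R·π/6 = πR/3) is exactly one-third of a great circle's (πR).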
What is interesting about this (to me) is that one can see, in the case of a great circle, why the ratio has to be 2:1. And you can just manage to stipulate that, since the point is where the sphere becomes a "virtual plane," so to speak, Euclidean values become valid there...well, because they do. (Just why Euclidean constants are as they are is a different question.) But in the case of the circle I've got in mind, there's not really an obvious reason why the integer 3 should obtain in that ratio just there.
I've no idea how many SCT readers are qualified to venture advice here for an amateur, but this is obviously not one of the really hard questions; it's just an interesting one that happens to have me slightly out of my mathematical depth. If anyone wants to pipe up, I'll be more than happy to hear from them.
Tuesday, January 25, 2011
At the beginning of January, a post by X-Cathedra at Well at the World's End, on Christian atheism, followed fast upon the heels of a discussion of agnostic Christianity by James McGrath at Exploring our Matrix. This is a very late (in internet-time) reflection, and given other commitments is somewhat unfinished too. Some of this is expanded from a comment I wrote on the latter blog. First, some quotes.
From Well at the World's End:

God may indeed have survived the tug-of-war between liberal Protestantism and post-liberalism... but it is clear that atheism, agnosticism, and secularism are more culturally ingrained and conceptually viable than they have ever been in history. And how full the pews are does little to affect this. As Charles Taylor notes, what's distinctive about our secular age is that God is understood to be simply one among many competitors in the marketplace of ideas.

From Exploring:
not only can be Christian agnosticism, but that in fact that is all we have. There are no people who have actual historical certainty about every historic Christian claim about Jesus. There are only people who have managed to attain a feeling of certainty. But being honest about the uncertainty, even though it can be unsettling to feel it, is not at all something to be ashamed of. Instead of describing it as "agnosticism" we could also call it "honesty."
Whether or not we agree that Christian agnosticism is "all we have" (this again depends on "how we define" the terms), it is certainly not an unprecedented position. See, e.g., the writings of Leslie Weatherhead, E. H. Johnson, H. G. Curteis, Alexander Mckennal, etc. Before anyone responds that there are no new heresies under the sun (or indeed among the clergy), I'll just offer my own favorite formulation of this position, from (non-cleric) Eric Voegelin -- hardly an accommodationist apostle for the latest wind of doctrine -- who said in The New Science of Politics that "uncertainty is the very essence of Christianity" (p. 122). (Voegelin called himself a "pre-Nicene Christian," for what that's worth.) It is worth recalling that the early church had a life-&-death struggle not with atheism (a label often applied to Christians back in the day), but with, as St. Irenaeus referred to them, "Gnostics, so-called."
I have already had recourse to Voegelinian uncertainty (e.g. here), so let me add here some remarks on "Christian atheism".
To speculate loosely about the connection between these two developments, I would hazard that Christianity itself, with its emphasis upon the historical Incarnation ("under Caesar Augustus," "while Quirinius was governor of Syria"), made almost inevitable the eventual short-circuit that would give us "Christian agnosticism." The strange marriage between the historical and eternal that Lessing found to be so problematic is the very center of the faith.
And this claim is also at the center of the death-of-God theologies. Altizer, Hamilton, and so on are all too often read as symptoms of their cultural moment, but what makes their claim radical is a certain ambiguity. It is not always clear whether and how these theologies are referring to "God", and in fact one often gets the impression that the claim is that God really had died; not the idea of God, not the cultural currency of God, not a believable story about God. This was, it will not be missed, also the claim of the New Testament. The New Testament invented the "death of God," to the scandalization of the world. In one motion the church proclaimed that God was the creator of the cosmos, of an ontologically different order; and then that this creation had pulverized Him into as little as it makes of everything: a corpse.
And then, of course, the claim is that this order, the order of fallen creation, found itself turned inside-out. This is the part of the story that the death-of-God theology left out. At this point, we suddenly revert to the terms of the world that was scandalized by the claim that God could die. Now we are back to a world in which death remains death, and the death of God just is the revelation that all God ever was was the idea, the cultural currency, the no-longer-believable story. This is why it is not surprising that a kind of "agnostic Christianity" arises.
The paradox at the heart of Christianity is inverted by the death-of-God theologies in such a way that finitude is (impossibly) absolutized. I can think of no better summation of this than Sartre's argument towards the end of Being & Nothingness that the project to be God is contradictory. But Christianity has been there ahead of him.
Yes, there was a claim that this made humanity ultimately responsible for itself. And this was true; but in what context does this responsibility obtain? If such "responsibility" means bravely building our fires until an inevitable infinite night, it is only another name for despair. This is why, for all my respect for Altizer (and I will insist on this; if you just shrug him off, you have not understood him), the so-called radical Christianity that was the scandal of 1960's American theology is a capitulation, not an articulation, of faith.
And given this, it is hardly surprising that some prefer the illusory safety of the halfway-house of agnosticism. Having said this, I'll add the caveat that this sort of retreat to agnosticism seems to me to be more frequent among onetime (what is sometimes semi-disparagingly called "recovering") evangelicals and/or fundamentalists. I do not know as many Catholics or Orthodox who would feel the need to qualify their faith in these terms. But I emphasize this is just an unscientific impression.
I'll also add that McGrath's valorization of mysticism and apophaticism in the face of skeptical critique of historical claims is well-taken, but equates only loosely with agnosticism. Agnosticism as the claim that one does not know seems to be more or less amenable to apophatic theology, but a strong agnosticism (a.k.a. "skepticism") is (despite appearances) less so. This is counter-intuitive. Because apophatic theology insists that God's nature remains unsearchable, one might at first glance assume it is simply skepticism applied to theology. But the kind of knowledge in question differs. It would take us far afield to explore this very deeply, so I will avoid generalizations about ancient conceptions of knowledge, but in general it strikes me that modern agnosticism sees knowledge as a primarily intellectual affair. Knowledge here is the kind of thing that one gives in answer to questions; it is formulatable. Augustine famously said that he knew what time was, so long as no one asked him. To the modern agnostic, knowledge is the sort of thing Augustine did not have when he was asked. To the apophatic theologian, knowledge may well be the sort of thing Augustine did have when he was not asked.
Even this knowledge, of course, fails when it comes to God. But the distinction is vital. If I retreat to agnosticism in the face of skeptical critique and think I am validated by virtue of the great examples of apophatic theology, I am mistaken. The cloud of unknowing is deeper than any intellectual skepticism. As a Zen proverb has it: No doubt, no enlightenment. Small doubt, small enlightenment. Great doubt, great enlightenment.
But then again, if we use knowledge to mean this sort of mental event that modernity means by it, rather than the noetic event of understanding the ancients have in mind (and I know, I know I am begging the question of just what that distinction implies), then it has to be said that the modern agnostic, "Christian" or no (McGrath included), is right. That really is "all we have." The question is, is it all we have to settle for?
Sunday, January 23, 2011
(Cross-posted, with some modifications for a different audience, at The Clearwater School blog.)
Most SCT readers know my pedagogy is based on the notion that we do not learn best what someone else has decided we ought to learn. My model stems from the practices at the Summerhill, Sudbury Valley, and Albany Free Schools, which all downplay (if not outright dispense with) classes, grades, age segregation, and curricula. My educational heroes are people like A.S. Neill, Mary M. Leue, Ivan Illich, Hanna and Daniel Greenberg, John Holt, Jonathan Kozol, Alfie Kohn, John Taylor Gatto, Sir Ken Robinson, David Deutsch, Sarah Fitz-Claridge, Peter Gray. There are serious differences between these thinkers; they do not all agree with each other about everything (nor with me, more's the pity!). But their orientation towards democracy and children's rights is fairly clear.
Recently there have been two pieces of media that highlight aspects of this orientation. I commend both of them for the way they present these models in three dimensions, so to speak.
The first piece is a new short (13 minutes) film recently produced by Sudbury Valley School. You can view it here at the blog of The Clearwater School (where I am a volunteer). (The embedded video is small, but if you click on it it will give you a full-screen option.)
About two and a half minutes into the film, SVS graduate Ben mentions that many parents ask, when they are first exposed to the Sudbury model of education, "But--what if my kid just plays video games all day?!" (Ben notes that this is more or less what he did during the first of his four years at the school.) This issue also comes up in the other piece I want to mention. The Brooklyn Free School was recently featured in an episode of N.P.R.'s This American Life. The segment (Act 3 in the show) addresses the school's commitment to empowering students with all the decisions involved in running their school, which includes the degree of use to which computers will be put.
There's a great deal more to both SVS's film and This American Life's radio segment than computer games and movies. But for the rest of this post I want to focus on this issue, because in my experience, Ben is right. This question comes up again and again, and as the Brooklyn Free School learned, it may need to be asked over and over again by the students themselves.
I volunteer at Clearwater, but I make my moderate living working for an after-school program at a public elementary school. My program allows me to give my students a good deal of autonomy, but the notion of letting them just "do what they want" brings reactions from my co-workers that range from blank stares to deep you're-joking-right? discomfort. Surely, it is assumed, it's my job to give them "projects"-- mini-lessons in science, art projects in clay or wooden craft sticks, songs we all learn together. Won't they just waste their time if I don't? And when it comes to computers (I am able to make the computers in the school library available for not quite an hour and a half every week) well, maybe they could be using the computer to, you know, research something or finish their homework, but you wouldn't let them just play games? or watch videos?
I'm going to mainly talk about games here, though a lot of my considerations apply to videos (and I mean either mass-media or homemade) as well.
The objections that arise seem to me to be motivated either by concerns about content (potentially violent or disturbing images, actions, or plots), or by concerns about the medium itself (computer games being a “waste of time,” “addictive,” and so on).
My thoughts on this are in process and revisable, but they are also the fruit of long reflection and practice. I should first say that I have a threshold for what I consider “appropriate” content at my work. This standard is far stricter than what would be countenanced at Clearwater (anything less than AO, the resident student tells me), or than I would eagerly welcome in my own home, for instance. The reason for this is simple: job security. One or two angry parents were all I needed to encounter before deciding to err on the side of over-compensating caution. In general I am prepared to trust the school district’s internet filtering program, but I keep a close eye on the browsing and playing that students do. So far I've never felt the need to tell a child they couldn't watch what they were watching, but I've had plenty of discussions about online content with kids. What I've found is that kids (1) can take in a tremendous amount of variation in even a short while online, (2) are capable of thinking critically and creatively about it and will do so aloud with you if they trust you and feel the need, and (3) are very good at enforcing their own “screening.” Whether it's a game that's too violent, or a Wikipedia article with too-much-information about sex, material that triggers kids' own internal repulsion does not stay on their monitors.
Computer games were in their infancy during my formative years, and so I spent little time engaged in them as a child. (Arcade games held some appeal but were too noisy, cost more quarters than I wanted to spend, and I was rarely very good at them.) Consequently, I could not at first empathize with the unabashed enthusiasm for these games which I meet in kids. It took a conscious and intentional effort to familiarize myself with them. I played alongside students and I played with my stepson. I have acquired a significant respect for the art and imagination of both the design and the play of computer games, which I almost entirely lacked when I first started working with students over a decade ago. Far from being a single monotonous activity (as the dismissal “just playing video games” might imply), such games are complex discrete units designed to build competencies in attainable steps. The advanced dexterity and strategizing required will hamstring anyone who tries to navigate one of the higher levels of a game before mastering the basics. This was brought home to me over and over, and it alone ought to have persuaded me that the notion that no learning was happening in these games was naïve.
It took me longer to come 'round than it might have; not because the games weren’t really learning tools, but because I actively resisted seeing them that way. It took me a long while to get over what I eventually conceded was a prejudice against the form of the game: I just didn’t like video games! I was reacting against the form; I found them strange and hard to understand, “cartoony,” and trite. My reasons weren’t all compatible (“too difficult” and “too simple,” for instance); but so long as I was content not to examine my motives, they tended to reinforce each other anyway.
My reticence was finally overcome when I asked myself: what's the salient difference between a computer game and any other game? Say, a computer version of Monopoly. I am not a fan of Monopoly--like most grown-ups I know, I find it tedious and frustrating--but I am at a loss to say why a board game that (despite my personal distaste) would never be banned from my classroom, should be any different from a version played on a screen. And once I have conceded this, I fail to see why games that more fully exploit the medium they employ are any less appropriate; indeed, they are arguably much more so, since they actually do familiarize players with the technology which is indisputably going to be no trivial part of our culture for the rest of our lives.
When I watch kids in my room play these games, I am struck by how social they are. They are not staring at a screen doing nothing; they are vocal, mobile, often jumping up to see what someone else is doing. They are excited, engaged, and interactive, not just with the game but with each other; far more so than they would be if, say, they were reading a book. Whatever is going on with the game, the kids are also navigating the always-more-complex-than-you-think terrain of peer society, not the least considerations of which are fairness and turn-taking, but also learning how to teach and learn from each other.
I regard the students in my class as capable of making responsible decisions for how they conduct themselves and I have found time and time again over ten years that they fulfill those expectations, and follow their passion if I get out of the way. But of course, students have more than one passion; the artist and the runner are often the same kid. A child has limits just as I do, and boredom sets in sometimes. In my experience, a child will indeed get bored with running, or drawing, or a computer game, in his or her own good time (and, chances are, not on my schedule), when they have stopped learning what they are interested in.
This is why, beyond all of the considerations I mention above, salient and even vital as they are, there is one concern which grounds my whole approach, and which would obtain even if I agreed (as I don’t) that computer games, or any other activities the kids pursued, wasted their time. It is often noted that my classroom style is somewhat “free.” This is a word I like and that I take very seriously. One of the most central values I have is respect for the autonomy of your children. Because my primary motivation is always to cultivate a respectful and honest relationship with each child, I want to give them exactly the same respect that I want for myself. It is true that sometimes I myself waste my time--by my friends’ standards, my family’s standards, even my own. I might fritter it away on television, or oversleep, or read a comic book, instead of working in the garden or writing my next essay. I might be decompressing after a hard day, getting valuable and much-needed down time; but let us assume I really am, even by my own standards, “wasting time.” Even assuming that this could be evident to the outside, I would still not want my wife or my best friend to tell me that I had to stop what I was doing, to impose a rule on my behavior that told me I had to do something more “worthwhile.” My wife might remind me that I have promised to wash the dishes; my friend might suggest that we have a jam session or even that I might find it rewarding to read this book he’s been recommending. But these suggestions are made in a very different spirit than laying down a rule or a demand. Would anyone say that the way to address this would be to invoke authority?
This is what it comes down to for me: respecting the right of a child to decide what to do with his or her time. And I have found that if I cultivate respect for the children I work with, I can have far more fruitful engagements with them about things that matter, including the things they will, sooner or later, wind up being "exposed" to--"adult content" included.
Thursday, January 20, 2011
Bruno Latour is sometimes derided, sometimes praised, for having made science itself the object of study, and for pointing out the inextricably human politics that go into what gets science's imprimatur. Two recent articles have me thinking about some aspects of these politics.
Depending on who you ask, the publication of a peer-reviewed parapsychological study is either a scandal or a refreshing example of free inquiry. The Journal of Personality and Social Psychology, an academic journal with a good reputation (until now, anyway), has printed a paper by Daryl Bem of Cornell University, a name and an institution with some respectability. The study reports two different experiments that, Bem claims, give reason to think that events in the future could impact the human mind. One experiment showed 3.1% better-than-chance results when participants were asked to predict on which of two screens a picture with erotic content would appear. (The control group's non-erotic pictures produced results that stayed within the margins of chance alone.) The other experiment asked volunteers to look at a series of words, then gave them a surprise quiz asking them to type in the words they recalled. After this, the computer randomly selected 24 words from the series and asked the subjects to type them again. The words that subjects re-typed (after the recall test) tended to be the words they had done better at recalling.
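As an aside on the arithmetic (a sketch with made-up trial counts; I don't have Bem's actual numbers in front of me), it's worth seeing how much the significance of a 53.1% hit rate on a two-choice task depends on how many trials were run. The exact binomial tail probability tells us how often pure guessing would do at least that well:

```python
from math import comb

def binom_p_one_sided(n, k, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p): the chance of scoring at least
    # k hits in n two-choice trials by guessing alone
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial counts, each with a 53.1% hit rate
for n in (100, 500, 1000):
    k = round(0.531 * n)
    print(n, k, binom_p_one_sided(n, k))   # p shrinks as n grows
```

With a hundred trials, 53.1% is thoroughly unremarkable (p ≈ 0.3); the same hit rate over a thousand trials clears the conventional 0.05 bar. This is one reason so much of the argument over Bem has been about the statistics rather than the stimuli.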
Now, eerily, even before I read about the critical reaction to Bem's paper, I somehow just knew, you know?, that the Committee for Skeptical Inquiry would have some thoughts on this. I also knew that the CSI would refer to the experiments of J.B. Rhine in the 1930's. It's eerie.
But perhaps my thoughts on Rhine were triggered by a recent New Yorker article, by Jonah Lehrer, on scientific inexactitude. This article is about the "decline effect," the tendency of a number of well-established experimental results across scientific disciplines to trail off with repeated investigation. That is: very well-designed experiments which seem to show robust correlations tend, on repetition, to yield less and less impressive conclusions. Rather than becoming more and more secure,
all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.

So scientists attempting to replicate results are coming up short; so what, you might say--this happens all the time in science. Failure to replicate is probably the norm, which keeps one-off flukes or unintentionally engineered results from getting widely accepted:
The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But this phenomenon is different: this is the replication of already well-established research, research that had already passed the hurdles of scientific respectability, including peer review and, well, replication.
Among the many complaints that Daryl Bem's results have occasioned is that there must be some problem with the design of the experiment. One comment on The Last Psychiatrist's post on this subject puts it succinctly:
There is a common problem that peer review is specifically designed to avoid. There are often results that seem strange or unexplainable to newcomers to a field that are actually well-known problems of experimental design (i.e. you're not testing what you think you're testing). This is where the experts come in; they have seen these errors before and can point them out before they propagate.

The problem is that Bem's results are not those of a wet-behind-the-ears grad student. One can still say that he (or the Journal of Personality and Social Psychology) should have asked different experts (The Last Psychiatrist thinks it should've been physicists; a lot of commenters have suggested statisticians). See too NPR's Robert Krulwich's musings on this.
The scientific process in a nutshell: you notice a phenomenon that you want to account for. You frame a hypothesis. You construct an artificial circumstance in which the only variable is the mechanism of your hypothesis. If your phenomenon is unchanged when your mechanism changes, and you have rigorously screened out all other possible changes, your hypothesis is disproven. If, on the other hand, your phenomenon changes as you alter your chosen mechanism and nothing else, you may consider your hypothesis validated.
This little synopsis will be modified and stretched and clipped and spun by different philosophers of science, but in essence this is the scientific method, a wonder of parsimony, elegance, and indifference.
Of course there is a snag: the little word "only". How possible is it to alter only one circumstance? This is at least part of what the commenter meant by "you're not testing what you think you're testing." And now it seems to turn out that all sorts of random effects might squeeze into an experiment be it never-so-hermetically-sealed. This is at least one possible reading of the experiment, mentioned in the New Yorker article, which reproduced as minutely as possible the circumstances of a test of the effects of cocaine on mice. Same cocaine. Same dose. Same breed and age of mice. Same time in captivity, same dealer. Same cages. Same bedding material. Same etc., etc., etc. The only difference was location: in Portland and Albany the coked-up mice moved six or seven hundred centimeters more than usual; in Edmonton, Alberta, they moved over five thousand centimeters more. And on different tests, it was different labs' mice whose stats landed in outlier territory. In other words, it might just be noise, but noise you can't screen out.
Or then again, maybe reality just wants to play tricks. Maybe it adjusts to your findings, in a kind of reversal of Rupert Sheldrake's morphic fields, so that rather than spreading, a breakthrough insight gets canceled out. Or maybe, as per Quentin Meillassoux's hyperchaos, whereby the laws of nature could change at any moment, the laws of nature are in fact changing at every moment. Or maybe what you can't screen out is fundamentally relevant, not noise at all, but either something you can't correct for, or something you'd never think to correct for. Maybe, as Heraclitus said, "Nature loves to hide."
The "decline effect" has been getting attention from Jonathan Schooler, who was frustrated by the difficulty he was having in replicating his own results, results which had made him famous in the world of cognitive psychology in 1990, concerning what he called "verbal overshadowing": the notion that having described a face in words actually makes it harder, rather than easier, to recognize visually. Schooler's initial results were striking: subjects who had watched a video of a bank robbery and then written a description of the robber later identified the robber from photos with an accuracy of about 38%, as opposed to 64% accuracy in those who had not made this written description. This is a significant result, and (assuming the experiment was well-designed in the first place) ought to be replicable. But Schooler himself found his results dwindling; the effect would be there, but less starkly. It dwindled by 30%, then another 30%.
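One mundane mechanism worth keeping in mind here (my own toy illustration, not something from the article): if only the flukishly strong early results get followed up, simple regression to the mean will manufacture a "decline" even though the underlying effect never changes. A sketch, with made-up numbers:

```python
import random

random.seed(1)
TRUE_EFFECT = 0.2   # the real, unchanging effect size
NOISE = 0.3         # sampling noise in any single small study

# Run many small "studies"; only the striking ones get followed up.
originals = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(10000)]
striking = [e for e in originals if e > 0.6]              # the published flukes
replications = [random.gauss(TRUE_EFFECT, NOISE) for _ in striking]

mean_striking = sum(striking) / len(striking)
mean_repl = sum(replications) / len(replications)
print(mean_striking)   # well above the true 0.2
print(mean_repl)       # back down near 0.2
```

This can hardly be the whole story for a single researcher re-running his own paradigm, but it shows how a decline can appear without reality changing at all.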
A profoundly troubled Schooler looked into the work of a predecessor: the aforementioned J.B. Rhine, whose investigations into E.S.P. in the 1930's found one test subject who was astoundingly good at guessing (or "seeing", depending on what you believe) the faces of Zener cards. At least, he was good for a while; whereas most of Rhine's subjects were able to guess rightly at about the 20% chance-rate (there are five cards), for a while Rhine's star subject, Adam Linzmayer, would sometimes guess at a shocking near-50% rate. In fact, initially, Linzmayer guessed two different nine-card runs at 100%, and for a very long while his record remained in the upper 30's. Critics like to pooh-pooh Rhine's results with the claim that his experiments were sloppy (and some were), but what is really interesting is the fact that Linzmayer's high results did exactly what other results do, results that no one has dreamed of accusing of being fraudulent: they declined over time. Eventually, Rhine postulated that Linzmayer was bored or distracted; in any case, something was interfering.
In 2004, Schooler designed an experiment "testing" for precognition, but his real quarry was the decline effect. His experiment is structurally very like Bem's. Schooler asked test subjects to identify visual images flashed momentarily before them. The images were shown very quickly and usually did not register consciously, so subjects could not often give a description, but sometimes they could. Half of the images were then randomly chosen to be shown again. The question Schooler asked was: would the images that chanced to be seen twice be more likely to have been consciously seen the first time around? Could later exposure have retroactively "influenced" the initial successes?
The difference between Schooler's and Bem's experiments is not in the design, but in the aim. Schooler "knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect."
“At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”

Bem, according to the New York Times, has received hundreds of requests for the materials to replicate his study. Since the materials included a good stack of erotic pictures, we must exercise some charity in the surmise as to the motives of researchers. Now here is my prediction: Bem's results will decline just as Schooler's did, and this will tend to validate critics' dismissal of his initial study; they will not ask themselves about the initial findings, just as none of them asked themselves about Schooler's. And we will still not know why the results flatline.
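One mundane mechanism that could produce a decline effect, without fraud and without vanishing paranormal powers, is regression to the mean under selective follow-up: if results are noisy and we pursue only the striking early ones, the replications will drift back toward the true value. Here is a toy simulation of my own (it has nothing to do with Schooler's actual design, and I don't claim it exhausts the phenomenon):

```python
# A toy simulation: suppose the true effect is zero and every "study" just
# adds Gaussian noise. If we follow up only the striking early results,
# their replications "decline" toward zero -- selection plus regression
# to the mean, with no fraud anywhere.
import random
from statistics import mean

random.seed(1)

def observed_effect(true_effect=0.0, noise=1.0):
    """One noisy study: the true effect plus sampling noise."""
    return random.gauss(true_effect, noise)

# Run many initial studies; keep only the impressive ones (> 2 s.d.).
initial = [observed_effect() for _ in range(10_000)]
striking = [e for e in initial if e > 2.0]

# Replicate each striking study once, under identical conditions.
replications = [observed_effect() for _ in striking]

print(f"striking initial results: {len(striking)}, mean {mean(striking):.2f}")
print(f"mean of their replications: {mean(replications):.2f}")  # near zero
```

Whether this is the whole story of Schooler's decline (or Linzmayer's) is exactly what remains in question; the simulation only shows that a decline, by itself, proves nothing either way.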
If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
Monday, January 17, 2011
A good discussion on Mark Goodacre's blog about the Jesus-never-lived myth. For those who don't know, this is the curious notion that Paul invented Jesus more or less out of whole cloth (sometimes with a liberal dose of Hellenistic mystery-religion, sometimes straight out of the ferment of post-Maccabean Judaism), and that the Gospel-writers came along later and retrofitted the character Jesus with some biographical details (depending on the theorist, this may or may not have been Paul's intent).
This dovetails with two other recent posts on the same theme (they seem unrelated but there must be a meme revival going around): The first is from Quodlibeta, the second from Exploring our Matrix.
This stuff strikes me as being as close to historical revisionism as one can respectably come these days. Shakespeare/Bacon/de Vere/(choose your favorite)-- this one is making a comeback. Fashions change, and I wonder what else will one day be subject to the debates of historians. The Holocaust? The moon landing? 9/11? (Addendum 1/19/11: reflecting upon Benoît's comment [below], I add that I do not consider these intellectually or morally equivalent.)
I happen to have a big ol' soft spot for minority views, so revisionism holds a certain fascination for me (I've read a bunch of Velikovsky, for instance, and some global warming skepticism, and psychic research, and am always interested in finding the point at which the experts get their hackles raised). But I don't necessarily buy into these Fortean accounts. I read them for the little factoids they tend to select for, and for the humbling awareness they foster, that one is constantly dependent upon the say-so of experts. When you first realize this, it is a tremendous splash of ice-cold alienation right in the face. Then you realize that you were always alienated, and now you know it-- and can start to negotiate your life differently.
But what about the true believers, who really do buy in? What can motivate a scholar to buck scholarly consensus like this? Probably any number of things, and in the case of the mythicists, it would be naive to reduce them to some lingering animus against Christianity--wouldn't it? I am sure if you ask them, nine out of ten will answer, like Dave Fitzgerald answers Jim McGrath, that they didn't start out a "mythicist;" they were persuaded by the evidence. I take this in the same spirit as I would the assurance that a scholar started out unpersuaded (or antagonistic) about Shakespearean authorship-controversy or 9/11 revisionism or whatever. ("Actually, I set out to prove that the case for Intelligent Design was nonexistent; to my surprise...") It is not that I think these intellectual positions can't be honestly held. But can one honestly hold that one did nothing but weigh evidence, and weigh it exclusively on its own merits? How was it that you got so lucky as to just be capable of that degree of impartiality and incisiveness?
There's a potential red herring here, since this question has no direct bearing upon the truth or falsity of the positions in question. Edward de Vere could have written all the Shakespearean plays and poems, even if every last person convinced thereof was so persuaded because of resentment, and every last defender of 'the Stratford man' a paragon of disinterested integrity.
So, no, I don't presume that Fitzgerald or Richard Carrier started out just looking for another way to undermine Christianity and hit upon the mythicist hypothesis as the apparently best way, any more than I think David Ray Griffin was already just looking for an excuse to accuse the U.S. Government, or "powerful forces within it" at any rate, of carrying out a false-flag operation to plunge us into a never-ending "War on Terror."
But I can't help but be struck by the weird audacity of the cause. I mean, sure, if you could finally prove, from documents yielded grudgingly by the CIA or somewhere, that Oswald Did Not Act Alone, that would certainly justify twenty or thirty years of scrounging in microfiche and being mocked by the culture at large. But if you could prove that the central figure of two millennia of Western culture never existed... knocking down Lenin's statue or the Berlin wall would be nothing compared to that. This may not be what motivates every argument Earl Doherty or G.A. Wells makes, but I'd be astonished if it hadn't occurred to them.
(Addendum: James McGrath has put up an index of his posts on Mythicism so far. Check it out).
Saturday, January 15, 2011
Last year I posted a somewhat disjointed critique of Paul Berman's attack on Tariq Ramadan. My reaction was to a long essay Berman published in The New Republic; I want to point now to a more measured review by Hussein Ibish of Berman's book The Flight of the Intellectuals, "important and frequently brilliant, but also seriously flawed," of which the article in question is a sort of core.
Ibish makes a mostly positive assessment of Berman's attack on Tariq Ramadan. He concurs that Ramadan tries to "reassure" everyone (Muslims moderate and radical, westerners liberal and conservative), and that he has conflicting intellectual loyalties (to the traditions laid down by his grandfather and father, and to the enlightenment values of tolerance and reason). On the other hand, he strongly dissents from Berman's portrait of Ramadan as an apologist for terrorism. I think he puts things too strongly when he agrees with what he calls
Berman’s damning and persuasive conclusion is that Ramadan “cannot think for himself. He does not believe in thinking for himself.”

Ramadan is attempting, and may well be failing at, something not easily pulled off. This is the marriage of a traditionalism, and a non-western traditionalism at that, with post-Enlightenment rationalism. The former quite expressly has a very different evaluation of (and perhaps even meaning of) "thinking for oneself" than does the latter. The effort to weave these two strands together is not a simple one; it may be doomed to failure. But certainly it cannot be judged simply by the standards of one or the other. To offer such judgment is simply to prescind from the effort in the first place.
Ibish gives an example from Ramadan's book Western Muslims & the Future of Islam, in which Ramadan lays out some of the principles of Qur'anic exegesis, especially as regards ijtihad or legal interpretation (particularly independent interpretation). After a two-sentence summary of a fairly detailed account of this centuries-old (and far from unanimous) tradition, Ibish sums up:
Modern minds are reassured that even religious texts require interpretation; traditionalists are reassured that explicit texts do not allow for interpretation; and everybody is reassured that there are, in fact, very few genuinely explicit texts and that lots of interpretation will be necessary.

This cavalier dismissal is all too brusque. Ramadan's work is a serious effort to describe a tremendously complex and contentious process that has unfolded for more than a millennium and generated dozens of competing schools; it is almost transparently tendentious to read it in light of Berman's remarks about "reassurances in every direction." Ibish is on somewhat surer ground when he remarks that Ramadan's principles here are methodological and thus do not necessarily have any results that wary westerners might welcome; the principles are all on the meta-level, not the practical. Ibish is vague but critical on Ramadan's contribution here:
having described the process, Ramadan has almost always failed to play a positive role in shaping the interpretation in the right direction, which renders his contribution, at this point anyway, largely pointless, if not negative.

This may be true (I am not familiar enough with all of Ramadan's work to say), but it is also a bit unfair. Methodological concerns have a legitimate, and not negligible, role in exegesis and application of traditional texts. It is indispensable, even if it is not enough (and who would say it is?), to remind people of what one would think is obvious but is all too readily forgotten: that interpretation is called for, that scriptures do not read themselves. The misapprehension that the meaning is plain does not by itself create fundamentalism, but correcting it can combat fundamentalism. A fundamentalism that must interpret, and defend its interpretations, is one that is no longer taken for granted. And that is no small gain.
Ibish then gives a mostly critical take on Berman's reading of Palestinian nationalism, which I don't feel qualified to assess. But as regards Berman's attack on Ian Buruma and Timothy Garton Ash, for (Berman thinks) their failure to see through Ramadan, and especially what he sees as their craven abandonment of decency in their negative view of Ayaan Hirsi Ali, Ibish is very apt.
Berman views their negative evaluation of Hirsi Ali as symptomatic of a kind of Western liberal self-hatred, because he sees her as a champion of humanist and Western values, and more important, of the Enlightenment and its values. But Hirsi Ali is, alas, an anti-Muslim bigot. [...] When asked, “Don’t you mean defeating radical Islam?” she replied, “No. Islam, period.” She explained, “I think that we are at war with Islam. And there’s no middle ground in wars.... There comes a moment when you crush your enemy.” She concludes: “There is no moderate Islam. There are Muslims who are passive, who don’t all follow the rules of Islam, but there’s really only one Islam, defined as submission to the will of God. There’s nothing moderate about it.”

This sort of talk does not really trouble Berman, though. As Ibish notes,
Berman asks, “What if it were true [that Hirsi Ali has been] hurling a few high-spirited insults at her old religion?” suggesting that such comments are somehow reasonable, understandable, or harmless,

and most tellingly, this
soft-pedal[ing of] Hirsi Ali’s aggressive, intransigent, and intolerant attitude toward Islam and Muslims...is precisely what he accuses Buruma and Garton Ash of doing with Ramadan.

Ibish has little to say about Berman's analysis of Buruma and Garton Ash's motives. He thinks Berman's accusations of cowardice (they must be afraid of extreme Islamism, and so avoid criticizing it) are unconvincing (why wouldn't they just talk about something else altogether?); he is less critical of Berman's charge that they are suffering from a kind of overwrought liberal guilt. He points out, as above, that it does not take such guilt to want to recoil from rhetoric like Hirsi Ali's, but grants that such guilt probably does characterize and motivate some westerners. What the right answer to this is, is another question.
Berman may well have a good point about a certain type of Western liberal intellectual who fails to defend humanist and Enlightenment values in the face of presumed non-Western authenticity, but if he has gotten the diagnosis right, his prescription is no improvement on the disease.

The latter half of Ibish's review is of Gilbert Achcar's book The Arabs and the Holocaust: The Arab-Israeli War of Narratives, partly as a corrective for what he sees as Berman's overemphasis upon Nazism in the formation of Arab attitudes towards Israel. But he subjects Achcar's book, too, to scrutiny and critique. I haven't read Achcar so don't have an impression of the fairness of his judgment (predictably, not all reviewers have been so even-handed in their assessments), but I commend the whole review (despite having a higher evaluation of Ramadan than Ibish has) for the reading he gives of the history in question, and for the measured and even tone, which manages even when calling a book seriously flawed to give it credit for being a serious contribution.
Wednesday, January 12, 2011
Nicola Masciandaro's latest post, a learned and queasy-making set of variations on Dune, Nietzsche, corruption, and words, all hangs together via the central trope of the worm. Unavoidably but perhaps impertinently, it recalls for me the legend of the schamir, a worm "the size of a barley corn" (I cite from Baring-Gould, Curious Myths of the Middle Ages pp 121-152), "but so powerful the hardest flint could not withstand him," whereby Solomon (who had to get the demon Asmodeus to harness the worm for him) carved out all the stones for the Temple. Baring-Gould notes that this story seems to have been brought in to provide a fantastic set of variations on the fact that by tradition the stones of the Temple were said to have been unworked, i.e., not shaped but rough and indeed left in their natural condition. This had been the case with the altar. Deuteronomy 27:6--
You shall build the altar of the LORD your God of uncut stones; and you shall offer on it burnt offerings to the LORD your God.

"Uncut" here is שָׁלֵם, ShLM, "whole." It stems from what is certainly one of the most important roots in the Biblical lexicon, giving us "completion," "perfect," and of course "peace." The symbolism of "uncut" or "unhewn" is sometimes rendered with the term "living rock," e.g. 1 Peter 2:5 --
You also, as living stones, are being built up as a spiritual house for a holy priesthood, to offer up spiritual sacrifices acceptable to God through Jesus Christ.

This intersection between the symbolism of temple/altar and the body has a very long afterlife in the Christian tradition. In the Orthodox Church an altar must have with it a relic from the body of a saint (sewn into the cloth, called the antimension, which covers it), precisely because the testimony of the Church is that sanctification extends to our bodies and is known thereby. The insistence of the Creed upon "the resurrection of the body" is linked to this. I can pass on some oral tradition on this point. "Thus have I heard," that in the Soviet gulag, the Eucharist was celebrated by both Catholics and Orthodox upon (or at least near to) the graves of recent martyrs, and even upon the living bodies of those who suffered or in some cases were even then dying for the Church. (Some of this is also documented in the book The Forgotten: Catholics of the Soviet Empire from Lenin through Stalin by Christopher Lawrence Zugger.)
What I note here is the conflation, in the myth of the schamir, of the sign of corruption (the worm of death) and the living rock. Numerous readings of this might be ventured. One could talk about the return of the repressed, the way death comes back even into the stories of the building of the temple of the "God of the living." Alternatively one might see the schamir as the worm finally getting to do what in an unfallen world, a world without death, would have been its allotted task.
The mythology and symbolism of the worm very quickly opens up onto that of the dragon (O.E. wyrm), which meets up first and last with the Biblical serpent whose first conversation with humankind turns about hermeneutics ("Did God say...?"), death ("You will not surely die"), and gnosis ("...knowing good and evil"). In conjunction with Masciandaro's post I re-read the latest entry at a rarely-updated but always-worth-reading blog, Reflections From the Black Stone, on serpents in celestial mythology. There, Christoph de la Cruz reminds us that the Bahir links the dragon with the pole star, and this in turn resonates with any number of creation myths, the serpent around the roots of the tree of the axis mundi. (Compare, in Norse and Germanic myth, Níðhöggr gnawing on Yggdrasil, or Jörmungandr coiled around Midgard; or, in the Mediterranean milieu, the Pythian shrine to Apollo at Delphi where he killed the Python, and where one could see the stone omphalos or navel of the world.) Masciandaro too circles through the figure of the serpent bent around and devouring its own tail, an emblem of eternal recurrence, and concludes with Eriugena's comment on Psalm 22 (the psalm Christ cited from the cross), v. 6, "For I am a worm and no man,"
‘In what sense “no man”? Because he is God. Why then did he so demean himself as to say “worm”? Perhaps because a worm is born from flesh without intercourse, as Christ was from the Virgin Mary. A worm, and yet no man. Why a worm? Because he was mortal, because he was born from flesh, because he was born without intercourse. Why “no man”? Because In the beginning was the Word, and the Word was with God; he was God (Jn 1.1).’ It can also be understood thus: ‘I am a worm and a human is not,’ that is, I am a worm and human is not a worm. As if he were to say, I who am more than a human penetrate the secrets of all nature, as a worm [penetrates] the bowels of the earth, which no one participating only in human nature can do. With the sense agrees that which is written in another Psalm, ‘and my substance in the depths of the earth’ [Ps. 139.15], that is, and my substance, which is wisdom in itself, subsists in the depths of the earth, that is, the innermost folds of created nature. ‘For the divinity beyond being is the being of all.’ Thus the worm that penetrates the hidden things of all creation is the Wisdom of the Father, which, while human, transcends all humanity.” (Commentary on Pseudo-Dionysius' Celestial Hierarchy)

But he begins with Herbert's Dune, in which the worms provide the spice melange that makes possible the bending of and burrowing through space itself. Do these tropes ever disappear?
postscript: Gypsy Scholar reminds us of the ashy fare of serpents in Milton's Hell,
and, speaking of the depths of the earth, The Ontological Boy reminds us that Parmenides had to persuade Socrates that dirt could participate in the Forms.
Saturday, January 8, 2011
Quentin Meillassoux's strange essay on Deleuze, "Subtraction and Contraction: Deleuze, Immanence and Matter and Memory," begins with a provocative thought-experiment: imagine that Deleuze's work had been almost entirely lost, and reached us only in fragments and quotations, like Heraclitus' or Pherecydes'. Meillassoux goes on to do a "reconstruction" of Deleuze based on a few quotations and some testimonia. Harman says somewhere that this is the work by Meillassoux he wishes he had written himself, and one can see how it would appeal to his quirky taste for counterfactuals. But the essay actually mirrors very well one of the ways I read thinkers, though I had not put it that way until I read it. As an attempt to correct (unsuccessful, doubtless) for my own tendency to pedantry, I have cultivated a certain casualness as regards reading philosophers-- which is to say, I don't hesitate to misread them. From a distance this might look like just a hangover from my undergrad days, when deconstruction was the rage and told us that there was only misreading. But in my case, it really is a strategy deployed against my own completism and tendency to second-guess myself. Of course I'm not so cavalier as to shrug off accuracy. But I look, really, for plausibility, knowing that I'm offering a reading and that there will always be another and another and another.
When it comes to a writer of whose work only a little is available, there really is no choice but to treat reading them as such an effort in plausible reconstruction. Yes, one can be careless or careful, but even the most painstaking scholarly effort yields a sort of self-portrait of the scholar, via the medium of the fragmentary evidence (which offers the same sort of "resistance of materials" as would the properties of paint on canvas, pastel on paper, or moulded clay). With Sappho or Parmenides, or "the historical Jesus," this is what you get.
But there are writers whose work is widely available and yet is accessible to me only in fragments, simply because I don't read the language. I am grateful every time Daan Verhoeven translates another one of his father Cornelis Verhoeven's essays. I deeply wish that Vladimir Jankelevitch's Traité des vertus and Philosophie première: introduction à une philosophie du Presque would be rendered into English (and I am very happy this has been done for Music and the Ineffable). Of both of these thinkers, and many others, I have a very lopsided impression of their thinking, because I am dependent on translations. But I have at least read full books, and while the books may not be their "central" ones, not their magnum opus, one can at least gather something about the style of their thought.
There are other thinkers for whom the case is more dire. I know Sergio Quinzio's work only from a single essay and from his influence on Giorgio Vattimo and Ivan Illich. In this case, I am very much like a scholar squeezing every word for as much as it can yield, and almost certainly construing too much. I can't pretend I know Quinzio's position on anything; I can only see whether I can make responsible use of his influence, knowing full well it is mediated not just by the sources I have but by my own project. Of course I wish more (well, something) would be translated (especially Silenzio di Dio and La croce e il nulla). Until then, I have to make do with what I have.
This is why the essay I have been anticipating most of all in the finally-available anthology The Speculative Turn is Bruno Latour's chapter on Etienne Souriau. I'll make Souriau the matter for a separate post. But I am curious to know if anyone out there has any writers-- they need not be philosophers-- the translation of whom they consider a desideratum. (Chris Vitale at Networkologies just posted news that the translation of Gilbert Simondon's works is underway.)
My reading of a thinker as a reconstruction, however, is not limited to those whom I read in translation. There's an important sense in which I realize I am reading everyone that way. This is because I assume that the referent in philosophy is philosophical experience, and this experience--always imperfectly rendered into words--is (I say, following Gilson), essentially unified, or (in my own terms) indelimitable [from itself]. I take this to mean, with tremendous hubris, that insofar as they are philosophers, I can read Daniel Dennett and Albert the Great, Martha Nussbaum and Aurobindo, together. Yes, they do apparently contradict one another. And yet, I discern in all of them a striving toward clarity and articulation of something both intellectually and intuitively satisfying; I account for their differences in terms of a dozen things--prior commitments, errors (my own or theirs), the infelicities of language. I of course do not assume that they do "ultimately agree," nor even that insofar as they do agree they are all enlightening, but rather that insofar as they are all enlightening, they point me to a possible agreement beyond words (which they may or may not share).
I think in this I am something of an unregenerate Romantic, following F. Schlegel and Novalis (and, here at least, wanting to resist Badiou), seeing every philosophy as an incomplete and uncompletable task, but one that intuits a whole.
Tuesday, January 4, 2011
When I read this self-assessment--
my motivations for engaging in philosophy and theology [are] almost entirely negative: a view sounds wrong, or it makes no sense, or it’s boring, or I’m simply tired of hearing it so often, etc., etc.... I suspect I’m part of a significant majority

--my first (unbecoming) response was to shake my head sagely and mutter something like, "Alas, yes."
For once, I managed to not comment in that mode. I don't really like myself when I give in to striking the curmudgeon pose; you know the one: "Yep, damn right so-called philosophers these days are just a bunch of infighting one-upmanshippers. Used to be they cared about Truth." In my defense, I had just been reading this post by Arturo Vasquez, which leads off with a long quote from neglected Catholic philosopher Charles de Koninck:
The attitude of philosophers towards their readers has completely changed. It is no longer the truth they speak, but more rather the reader and the writer who become the principal object of their preoccupation.... What is still more important is that the reader for whom they write is no longer the philosopher, but rather that vague individual called the man of good sense on some occasions, the cultivated man on others, and the general reader on others. Compare that procedure with that of Aristotle or of St. Thomas. The Discourse on Method is essentially a rhetorical work. It was also one of the first appeals to unformed man precisely as he is unformed, an appeal which will some day shine forth in the appeal to the unformed masses insofar as they are unformed.
Philosophical works take on a form which makes them more and more unrefutable according to right thinking. They are rooted in attitudes. Philosophy becomes more and more the expression of the personality of philosophers.
On Vasquez' blog, I commented in part:
I too am ambivalent about what can be critiqued as the “elitism” in the passage. I am attracted to the conclusion that Cartesianism is symptomatic of a strong shift in the conception of what philosophy is, and I do see this shift in terms close to those de Koninck uses. On the other hand, I second-guess this sympathy of mine. Might it not just be my own elitist aspirations, lusting to identify with the master-discourse? And then the pendulum swings back: For heaven’s sake, man! (I say to myself), Have the courage of your convictions! Are you just going to kow-tow to the prejudices of the age?! But when I slow down a bit and take a breath, I reflect that this very internal dialogue with all its back-&-forth bears a certain resemblance (in a [post]modern key, no doubt) to, say, Socrates’ self-scrutiny.

Now I think there's quite a lot in all of the stations through which that pendulum swings, and anyone who would follow the advice to know oneself had better take a good while at each. Like Leo Strauss, like Isaiah Berlin, like John Milbank, de Koninck points out a difference between the ancients and the moderns. I won't bother to argue the plausibility of this description, since the existence of some fault-slip between the ages at or about that time is after all why we call this age "modern", and was expressly acknowledged even at the time.
But there is a danger in identifying too strongly with the ancients, with those who (says de Koninck), cared for truth rather than persuasion, and addressed themselves to fellow initiates. We cast the whole rest of the world in the role of the sophist, the chatterer, the bullshitter, or at the very least, the parrots of today's consensus. At the same stroke, of course, one casts oneself as a member of the elite inner circle of those who know; and anyone who has spent a day in serious self-scrutiny knows to suspect their own motives in an us/them maneuver like this.
But this self-scrutiny fails if it just undermines one's own intuitions. Socratic attention is always first and foremost to one's own soul, but one mark of clarity of soul is to not be confused about the validity of questions. Even a bad motive for asking a question ("You ask that because you want to see yourself as a member of the inner circle of initiates"), does not render the question moot.
If one can conclude something like, "post-Cartesian thought still aimed at truth, just a truth accessible to a broader group that had no need of specialized vocabulary," then one has modified one's original claim (only the ancients aimed at truth, the moderns aimed at persuasion), but managed to keep hold of the value one assigns to truth, and of the sense that there is a significant difference in style in the moderns. So far so good. But one can still ask: Yes, but in what way does this new, modern conception of truth bear comparison to the old one?
This is at least one question the Socratic approach would ask; something like, "We said, did we not, that Truth was [X]? But now we are saying that Truth is [Y]. Can it be both?"
In a later comment, Adam Kotsko wrote:
In the case of wonder, you get an intuition that there’s something higher or better. In annoyance, you get an intuition that your current state of affairs isn’t as good as it could be. How is this not just two sides of the same coin?

My guess is that Socrates often got annoyed with himself for just the sort of reasons Adam lists in his initial post; and he certainly goes after his interlocutors who don't seem to notice that their views are either repugnant to reason, or decency, or both. But I notice that Plato almost always goes to lengths to depict Socrates as remaining unflappable, no matter how intractable the difficulties get (to say nothing of, say, being condemned to death), whereas his opponents get, well, annoyed. I am of course aware that we don't have to take Plato's programme for our own, let alone take his word for an accurate portrait of Socratic practice. Also, I note that it is Aristotle, not Plato, who calls wonder the beginning of philosophy (Plato is more about eros).
My own tentative stopping-place (for now) is: wonder trumps annoyance, and is the way that annoyance is transmuted into philosophical insight, rather than remaining at the level of merely loquacious tit-for-tat.
Sunday, January 2, 2011
Preliminary foray into the definition of and plausibility conditions for [the allegation of] bullshit, in one of the very first posts by my worthy opponent Thill Raghu at The Baloney Detective. This riffs of course on Harry Frankfurt's use of the term, previously developed independently by Fernando Flores. If you compare Thill's post to what follows, it will be clear that I concentrate more on the motive and less on the content of ostensible "bullshit" than Thill does. My post here is not a rebuttal or even a rejoinder to him-- some of this was even written before I read his post-- but he did provide the impetus to organize this into a semi-finished form. Read it, then, as a kind of complement to his.
Recently there were a few posts on Graham Harman's blog where he comments upon those who accuse him --both shockingly and naively, it seems to me-- of "not believing himself" the metaphysics he proposes. Harman responds (quite rightly) that to say this is to make an insinuation about ethics:
Before being right or wrong, people are either serious or full of sh*t. That is a basic distinction of human types, and...the basic fact of ethics.

(As a side-note, notice the emphasis Harman places upon sincerity in his account of vicarious causality (e.g., here). Harman uses this word metaphorically, but its connotations are not accidental.)
Philosophers court this sort of accusation, though, and the reason is that there is a useful sense in which one can legitimately spin theories, or even experiential gedankenexperiments, and not believe them. Harman proposes this himself in a comment on Meillassoux's paper "Subtraction and Contraction;" and Rogers Albritton, former chair of philosophy departments at both Harvard and UCLA, says something like this too, in a remark cited here:
Philosophy, as [Wittgenstein] means to be practicing it, “simply puts everything before us, [it] neither explains nor deduces anything” and it “may not advance any kind of theory” (Philosophical Investigations I 126, 109). Its aim is, rather, “complete clarity,” which “simply means that the philosophical problems should completely disappear” (ibid., 133). I’d like nothing better. Moreover, I believe it: the problems (at any rate, those I care most about) should indeed, as he says, completely disappear. That’s how they look to me. I love metaphysical and epistemological theories, but I don’t believe in them, not even in the ones I like. And I don’t believe in the apparently straightforward problems to which they are addressed. However, not one of these problems has actually done me the kindness of vanishing, though some have receded. (I don’t have sense-data nearly as often as I used to.) And if there is anything I dislike more in philosophy than rotten theories, it’s pretenses of seeing through the “pseudoproblems” that evoked them when in fact one doesn’t know what’s wrong and just has a little rotten metatheory as to that. (My emphasis.)

Shahar Ozeri, whose post pointed me to this admirable anti-credo, remarks pertinently that this bears upon Meillassoux's post-metaphysical speculation, which also tries to reclaim the status of "genuine questions" for ontological inquiry, though some (and I do mean some) of Meillassoux's answers-- as I have asserted before-- border on the bathetic. Nevertheless, Meillassoux seems to grant the fuzzy boundary between speculation and, well, bullshit (though he calls it by its old-fashioned Greek name):
Philosophy is the invention of strange forms of argumentation, necessarily bordering on sophistry, which remains its dark structural double. To philosophize is always to develop an idea whose elaboration and defense require a novel kind of argumentation, the model for which lies neither in positive science--not even in logic--nor in some supposedly innate faculty for proper reasoning. (After Finitude, pp. 76-77)

My own thinking tends to walk this line quite dangerously (though not at all in the same way as Meillassoux). I am constantly stepping outside my area of expertise (which is more or less being able to write and think passably in English); I offer interpretations of myths; I do historical reconstruction; I play (sometimes fast and loose) with science, fringe and mainstream. I shamelessly lift things from anarchists and business consultants, Jacobins and Constitutional scholars; I hint that my readings of Laura Riding or Rousseau have something to do with schools without grades, or with being able to be both Christian and Buddhist. (O.K., maybe I haven't connected all these dots yet, but don't put it past me.)
You bet this skirts close to the fire (perhaps a bit too close, Thill may think-- though I want to be clear that I don't accuse Thill of calling me a bullshitter). I think about this risk a great deal (e.g., here). I daren't guess (and there's no way of knowing), but I wonder whether Socrates, that great performance artist, ever worried about it too.
The bullshitter, as the one who is not a liar but is indifferent to whether their utterances are true or false, is in some way the inverse of the poet (who "Nothing affirmeth, and therefore never lieth"), because this indifference is not a sublimation in the service of something higher (to which one must metaphorically extend the category of truth), but a willful repression for the sake of something lower (reputation, career, getting the sex object into bed).
One of the greatest struggles I have, philosophically speaking, is wedding the seriousness of philosophy to the humility incumbent upon finitude. This constantly risks a kind of bullshit, as Albritton sees; one devotes a love to work one cannot ultimately believe in. (It is here that I'd locate the close kinship between philosophy and scientific method, which must also remain corrigible. But I think science makes things too easy on itself. As Wittgenstein also said, "Nothing is so difficult as not deceiving oneself" (Culture & Value 34e). I could go out on a limb here and speculate that one reason the siren-call of scientism is so seductive in our age is precisely that it distracts from this difficulty and this responsibility; and one motive of the Sokalesque animus against "Science Studies" à la David Bloor, Andrew Pickering, Bruno Latour, and others is that the latter approach directs us back to just this burden we had forgotten. This is a bigger piece of psychoanalysis than I would want to defend unmodified and half-baked, but I believe there is something to it.)
Given this preoccupation, I take quite hard the charge that I would willfully indulge in bullshit (as opposed to letting my conceits run away with me). People who throw around the charge of philosophical bullshitting either have no idea of the seriousness of what they are saying, or are casting real aspersions upon the motives of thought, or, in a very few cases, are offering one's conscience a salutary reminder. Those in the first case are not to be bothered with; those in the second should be called out. I can count on my fingers those I trust to fall into the third.