Future, Present, & Past:
Speculative~~ Giving itself latitude and leisure to take any premise or inquiry to its furthest associative conclusion.
Thursday, March 25, 2010
So in my day job, I'm a teacher. I work with students, grades K-5, at an after-school program. Sometimes this is more or less glorified daycare. Sometimes it is homework club, or basketball coaching, or any of a dozen or so improvised activities, mainly initiated by the kids I work with. I've worked in the schools, first as an AmeriCorps volunteer, then as a district employee, then at the after-school program, for ten years, and I have a fair idea, not especially nuanced but I think realistic and informed, of some of the realities in an elementary or middle school in my city. I've broken up fights between students as big as or bigger than me, administered tests, tried to help struggling kids catch up, and seen more than one go from non-reader to reader. I've seen things that would make you cringe, and "successes" by some standards that could bring a tear to your eye. Most of the time I find the work exciting, sometimes exhausting, always deeply rewarding. It is certainly the happiest I've ever been at a job.
I do have occasion to talk philosophy to the kids I work with. I stumped a number of them (and myself) with Heidegger's question "What is a Thing?" (the rule was, they couldn't use the word "thing" in the definition), and walked one or two through Cartesian doubt up to the cogito. One time I had four or five of them laughing a bit too loud at the back of the bus over the Euthyphro, which at least one thought was the funniest thing he'd ever heard. But for the most part, I don't really try out the canonical stuff on them; it's musty and smells of footnotes, and the last thing most kids want after school is more school.
We do, though, talk a fair amount about education itself, and its relationship with freedom, and power. Because I am constantly taking mental notes on how to be a better teacher, I pay a lot of attention to when I hear kids complain or enthuse about something they are doing in school. I listen to their accounts of what makes a teacher "nice" or "mean," fair or unfair; what makes something interesting or engaging for them, or bores them to tears. I get a lot of practical, hands-on tips from these conversations (I once had a ten-year-old boy confide to me, in real big-brother, lemme-tell-you-'bout-us-kids fashion, that "It's okay to be a little mean"); but what I want to focus on here is the more general impression I get of their impression of school. Not all kids are articulate or reflective enough to intentionally paint a picture of this, but every one of them knows very well that they aren't in school because they choose to be. They regard it the way most adults regard work: a necessary evil, the lesser of two evils perhaps, and often the devil they know. They each sense on some level that they are being made to do things which they would never, ever decide to do themselves. What is heartbreaking to me is the way they internalize the notion that this is somehow a good thing.
Let me be clear; we aren't talking about the them's-the-breaks of life, or the tough-luck unfairness of circumstance, or rolling with the punches and playing the hand that's dealt you. No one likes to have to adjust their life to the realities imposed upon them by happenstance, but ten-year-old children know very well the difference between happenstance and a decision, and they know the difference between a considered decision and an arbitrary one.
Whenever a new activity is announced in my class, the first question I get is always "Is it mandatory?" This is quite striking considering that the answer is almost always "no." The things kids have to do in my class in the course of a year can probably be numbered on the fingers of one hand. Their reaction thus indicates to me that they are so beset by "things to do" [read: things adults want them to do] that at the first sign of another one, they brace themselves.
And yet. Though they know very well the feeling of being put upon, the kids I work with have all more or less accepted that this is for their own good; or at the very least, that it's Just The Way Things Are.
I also volunteer one day a week at the Clearwater School. Clearwater is a Sudbury school; it's run using an "alternative" model of education, based on (and named for) the Sudbury Valley School in Massachusetts. It's a radically student-centered mode of education in which children never. ever. take. classes. unless. they. want. to. There are no grades, and no age divisions (the five-year-olds and the fifteen-year-olds aren't kept rigorously separated or together); above all there are no rules that haven't actually been agreed upon by those who live by them.
These absences (no classes, no grade levels, no transcripts) are the things that stand out in people's minds when Sudbury education is explained to them, but the actual content of the model tends to pass them by. Sudbury education is radically participatory, radically democratic, and radically organic. Far from being little lord-of-the-flies centers where mere anarchy is loosed, Sudbury schools are communities that are run by the students, for the students. There are plenty of rules, but they are neither arbitrarily imposed from on high, nor artificially "decided on," as I've seen far too often in a traditional classroom, by a sham one-time meeting at the beginning of the school year when kids are manipulated into automatically mouthing and "agreeing to" the same rules they've lived with last year and the year before and the year before that. Above all, every student and teacher can vote on every issue affecting the school. This includes buying a new computer, refurbishing the music room, changing the rules about who can go off campus when, or hiring and firing of staff (teachers are re-elected to their posts every year).
The first day I volunteered there, I played a game of four square. I was never a big sports player in my own school days, and now that I'm at least a little more coordinated (and a little less invested in looking cool), I can finally enjoy this staple of the American playground. On the day in question, it took me a while to register that there was something different about the game. I couldn't put my finger on it. I was getting out with about the same frequency; I was playing no better or worse than usual. What was it?
Finally it dawned on me. It had nothing to do with how I was playing; it was that playing was all I was doing. I wasn't the ref.
At the public school where I work, if a dispute breaks out between kids over who is out, the immediate next step is to call my name. Whether or not I'm playing the game, whether or not I even saw the play, whether or not I know the kids involved, it's my job to make the call, as if by virtue of how tall I am. Have an argument? Where's the grown-up? But at this Sudbury school, though there had been a dozen or so close calls and disputes, not one kid had looked at me to resolve anything. Not even when one kid stormed off in anger did anyone so much as look at me as anything but another player. I should add that I knew all these kids already; they weren't unsure about me as a newcomer; it simply had never occurred to them that the adult in the group was the default decision-maker. My vote counted, but it was a vote, not a veto or an executive order.
No kid asks if they can go to the bathroom. No kid raises their hand before they get a drink of water. The notion that they ought to "wait till the bell" before eating the lunch they brought would be met with incomprehension. Bell? You mean, like Pavlov's dogs?
When adults hear about Sudbury schools, their initial question is likely to be "how do they learn anything?" In fact, it is not difficult to learn the rudiments of any educational competence. It takes approximately 100 hours for a motivated student to learn how to read, for instance; the real issue is waiting patiently for that motivation. (The Sudbury Valley school maintains that in over 30 years no student there has failed to learn to read.) What the question really reveals is a fear that the motivation will never arise; that left to themselves, children won't want to learn anything. It'll be too easy to just float. It doesn't matter that this is a surreally counterfactual fear. We've accustomed ourselves to not trust our kids. And they have met our expectations.
When kids first hear about Sudbury, their first reaction tends to be "Whoah." But it's not an unambiguously enthusiastic "whoah." Almost without exception, the public school kids I have talked to about Sudbury education have said, "that sounds really hard." And they're right.
At the school where I volunteer, there have been (among other things) music classes, French classes, cooking classes; kids pursuing Aikido, computer programming, film-making; writing and producing a play; caring for livestock. And yes, reading. Some learning to read; plenty of just plain reading. There are also lots of games. Computer games, board games, team sports, weird improvised invented mash-ups of basketball and softball and soccer, strung-together make-believe role-playing games that are really just long conversations.
What all these activities have in common is that they were all initiated by some student. At some point a child or a teenager approached a staff member and said, "I want to learn French" or "Will you teach me to play drums?" or "We should put on a play."
When the kids I work with say "That sounds really hard," this is what they are talking about. Every step of their education is up to them. It is hard. It is also, in my experience, indisputably more rewarding. Because everything a student formally learns is something they have decided to learn, what they internalize is far more than a degree of mastery over a "subject." They have learned that they can explore and that their exploration has real meaning and concrete results.
And the teachers? Aside from no-brainers like keeping kids safe (a task made markedly simpler by the Sudbury model's genuinely high trust in student responsibility), the teachers are there to pay attention to kids, to cultivate real relationships with them, paying a close attention attuned to the actual interests of each one; to really be open to every request, and to make it happen when it's asked for. This might seem to multiply beyond control what a teacher needs to attend to--instead of teaching 5th grade math to 30 kids, I'm supposed to notice that he's interested in geology, she's into origami, they're asking about the civil rights movement, and that kid off at the other side of the playground is doing acrobatics? But in fact, working as a Sudbury teacher is far easier than teaching in a mainstream school. Aside from the absence of meaningless paperwork, every teaching encounter is fresh because it arises out of the actual relationship one has with the child. And, I ought also to mention, the lack of age distinctions means that children wind up teaching each other.
In contemporary mainstream American culture this model runs so deeply counter to the widespread assumptions of our age that it is not uncommon for people to refuse to consider a Sudbury school a school at all. I would submit that this critique might be better made of the enormous, and financially teetering, holding pens that our taxes fund primarily to free parents to work (so as to pay taxes), and to accustom children to surveillance and boredom.
Boredom. Ah, yes. Kids go through a lot of boredom at Sudbury schools--particularly students who have come from a more structured school environment. It is constantly mentioned in the literature. The responsibility for one's own education is really just a subset of being responsible for one's life. There are big stretches of time when kids ask themselves what they feel like doing and come up blank. Of course this happens in a public school too, but there the boredom is rarely given much chance to last very long because the bell is always about to ring or the next subject is about to be taught. In fact, the very thing that cuts off boredom also cuts off interest--because you can't invest enough time to really get involved in anything when you've got to cover seven subjects in one day.
At an after-school program like mine, though, kids can get bored. The difference lies elsewhere. I hear between two and ten complaints of boredom a week, I'd guess. I hear none at a Sudbury school. Kids get bored, to be sure--but not one of them assumes it is anyone's job but theirs to decide what to do about it.
I know that the picture I have painted could be disputed: too romantic, too Rousseauian, too naive. An excuse for lazy adults to do permissive teaching and spare-the-rod. Spare me. I'm a Platonist, but I'm an empiricist too, and I speak from experience. No, the kids I work with at the after-school program aren't miserable. They haven't had their love of life stamped out of them, or their creativity. This isn't because I've imported as many Sudbury-esque features into my class as I can adapt, but because the kids come from families who love them and go to a school run by teachers who care, and because, well, they're kids. But little by little I see them accommodating themselves to a world whose guiding axiom--despite the loving parents, despite the caring teachers--is that they do not matter. This axiom is not foisted upon parents or teachers by evil men in a smoke-filled room; it's a function of the model of education as mass-production we've come to accept.
This long post on education is not an interloper or guest on my mostly-philosophy blog. I acknowledged an interest in contentious issues, and I know of little more likely to rile people than strong opinions about how to raise kids. But I'm not really trying to bait anyone here. My interest is philosophical. Philosophy has been about pedagogy from the very beginning, ever since Socrates got his famous double charge of not honoring the gods of the city and of corrupting the youth. From Plato's doctrine of anamnesis to Heidegger's remark that real teaching is letting-learn, education is the very essence of what philosophers do. Dewey remarked that "Education is not preparation for life; education is life itself." The examined life, I would add. And given the contrast between sitting in rows for six hours a day, and roaming around exploring the world however your fancy strikes you, I can't help but reflect further that, as Alphonso Lingis writes, the unlived life is not worth examining.
Monday, March 22, 2010
There's a good deal in the blogosphere currently on the mutual relevance (or lack thereof) between scientific and ethical deliberation. Some of this pertains to Sam Harris' talk at the TED conference, in which he offered a short precis of his upcoming book on how to ground morality in empiricism. This talk drew a lot of comment, especially from Sean Carroll at Discover Blogs; Harris responded to Carroll and then Carroll re-responded. I limit myself to links just to these two, but there are a lot of posts out there--you'll find them if you follow your nose--on either side of the question of whether we can surmount the is/ought distinction. Harris thinks this is a shibboleth we'd do well to have done with; Carroll thinks that it ain't going anywhere anytime soon. And that pretty much lays out the terms of the debate.
Well, yes, it's a little subtler than that, but I can't help but feel we're dealing not with arguments alone, but with powerful motives for arguing. The sides have not been chosen on the basis of merits; the merits have been picked out and presented on the basis of a choice.
I have some opinions on the issues Harris discusses, and I'll go on record that I find him far more engaging on this than in his anti-religion jeremiads; but for now I want to focus on a more specific instance: not whether "science can answer moral questions," but what science says about how we answer them already.
According to research recently published in the Proceedings of the National Academy of Sciences, a strong burst of magnetism to the right part of the brain can make us regress, at least temporarily, a number of Kohlbergian stages.
O.K., that's really my editorializing gloss on it. "Regress" is a sort of loaded term, and the abstracts and reviews of the research I have read do not actually refer to Kohlberg. The magnetism is not really essential--it just happens to be the method the researchers used to put the brain's right temporoparietal junction, or TPJ (where the brain's parietal and temporal lobes meet), out of order for a short while. This is because they hypothesize that the TPJ is an important region of the brain for understanding other people's motivations. But the experiment did demonstrate that without a normally-functioning TPJ, the subjects tended to make moral evaluations in a decidedly younger style.
Given a basic narrative premise--say, a woman stirring something into her friend's coffee--which could then go in more than one direction ([a], the woman thinks it's sugar, but winds up poisoning her friend; or [b], she thinks it's poison, but winds up doing nothing more than sweetening the coffee with sugar), subjects evaluated the ethics of the situation. The test subjects, ranging in age from 18 to 30, were asked to rate how excusable they considered the woman's behavior, from 1 ("not at all") to 7 ("completely"). Under ordinary circumstances, most adults tend to view scenario [a] as an accident with no moral ramifications, and [b] as a grave situation in which the woman is blameworthy despite nothing unfortunate having actually happened. However, following a transcranial magnetic stimulation (TMS) disrupting activity in the right TPJ, test subjects tended to evaluate the stories somewhat differently. They did still consider unsuccessful murder more serious than accidental killing; but "subjects were significantly more forgiving of attempted murder when their right temporoparietal junctions were knocked out by TMS than when they were functioning normally." That is, they seemed to think that the results of the scenario mattered more; whether or not anyone got hurt figured larger in their considerations, regardless of the motives of the actors.
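The design just described is a two-by-two contrast: the actor's belief (innocent vs. harmful) crossed with the outcome (harm vs. no harm), rated on the study's 1-7 excusability scale. Here is a toy sketch of the reported pattern, with made-up illustrative numbers rather than the study's actual data; the point is only the shape of the effect:

```python
# Hypothetical mean "how excusable?" ratings on the study's 1-7 scale.
# These numbers are invented for illustration; what matters is the pattern:
# normally, attempted harm [b] is judged far less excusable than accidental
# harm [a]; under TMS, that gap narrows because outcomes weigh more heavily
# than intentions.

ratings = {
    # (belief, outcome): (normal_mean, tms_mean)
    ("thinks sugar", "friend poisoned"): (5.5, 5.0),  # scenario [a]: accident
    ("thinks poison", "friend fine"):    (2.0, 4.0),  # scenario [b]: attempt
}

def blame_gap(condition):
    """Excusability gap between accident [a] and attempt [b].

    condition: 0 for normal TPJ function, 1 for TMS-disrupted TPJ.
    """
    accident = ratings[("thinks sugar", "friend poisoned")][condition]
    attempt = ratings[("thinks poison", "friend fine")][condition]
    return accident - attempt

normal_gap = blame_gap(0)  # intentions dominate: large gap
tms_gap = blame_gap(1)     # outcomes dominate: gap shrinks
print(normal_gap, tms_gap)  # 3.5 1.0
```

The TMS subjects still rank the attempted poisoning as worse than the accident (the gap stays positive); it is the size of the gap that collapses, which is what the researchers mean by outcomes mattering more than motives.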
Liane Young, of M.I.T., one of the paper's authors, notes that the TMS-affected subjects were exhibiting a style of moral evaluation more often seen in three- or four-year-olds. As the National Public Radio story on the research mentioned,
Studies show that at this age, children will usually say a child who breaks five teacups accidentally is naughtier than a child who breaks one teacup on purpose, [Young] says. That's probably because their brains are still developing the ability to understand the intentions of other people.

In other words, young children tend to consider consequences, which they can understand, more than the motives which they are not yet equipped to grasp. I thought of this recently when commenting on Love of All Wisdom, where Amod has a post up about consequentialism, in connection with telling oneself lies. Can one, he asks, make a moral case for holding a false belief on the grounds that believing it offers a pragmatic advantage? This question arises, for instance, in conjunction with the issue of depressive realism which I mentioned before: how should we evaluate the will-to-accuracy that science (for instance) exemplifies, if one of the fruits of that accuracy is the conclusion that accuracy is best served by pessimism, whereas optimism serves one's likelihood to live and live well?
Back, however, to Young's research. Seeking comment, NPR went on to ask Joshua Greene of Harvard University, who offered his own take. NPR reported:
"Moral judgment is just a brain process," [Greene] says. "That's precisely why it's possible for these researchers to influence it using electromagnetic pulses on the surface of the brain."

The new study is really part of a much larger effort by scientists to explain how the brain creates moral judgments, Greene says. The scientists are trying to take concepts such as morality, which philosophers once attributed to the human soul, and "break it down in mechanical terms."

If something as complex as morality has a mechanical explanation, Greene says, it will be hard to argue that people have, or need, a soul.

This last remark is the sort of thing for which the phrase "non sequitur" was invented. With all due caveats about getting one's science from journalists, I can't help but reflect that "a mechanical explanation" of morality is pretty far from anything even remotely suggested by the study in question. What it strongly suggests, of course, is that the normal function of this region of the brain is part of the usual way human beings discern and evaluate other people's motivations. This is very similar to Greene's own researches, which are very suggestive about what physical systems in the brain are involved in moral evaluation, and perhaps why some moral questions are more difficult to resolve than others. In the case, for instance, of the well-known "crying baby scenario," in which you are offered the hypothetical choice of either smothering the eponymous baby, or failing to and attracting the attention of a murderous death squad upon a whole roomful of hiding refugees, Greene sees both an emotional impulse to think it's wrong to smother the baby, and a utilitarian impulse to weigh the number of deaths with each possible outcome. Moreover, different parts of the brain are at work in the emotional and the utilitarian case.

Assuming that this could be demonstrated in some watertight way, what exactly would have been demonstrated? Well, that when we make moral evaluations, we use our brains, and not always the same part of our brains. Indeed, no amount of scientific casuistry could ever come within spitting distance of telling you whether to smother the baby. It can only say what is happening while you consider the question.
Greene seems to sense that this denouement is a trifle bathetic. But then, it is not really the how of the brain's functioning that interests neurophilosophers like himself, he says in this paper; it's the fact of it itself:
What we really want, I think, is to see the mind’s clockwork, "as clear and complete as those see-through exhibitions at auto shows." ...the promise of useful applications is not what fascinates us. Our fascination is existential. We are hooked on the idea of understanding ourselves in transparently mechanical terms. But a strange feature of this impulse to see the mind’s clockwork is that, so far as this impulse is concerned, the clockwork’s details are almost irrelevant. We don’t care how it works, exactly. We just want to see it in action.

But then again, it isn't just the fact itself; it's also a certain sense of what that fact means:
Officially, we scientists already know (or think we know) that dualism is false and that we are simply complex biological machines. But insofar as we know this, we know this in a thin, intellectual way. We haven’t seen the absence of the soul. Rather, we have inferred its absence, based on the available evidence and our background assumptions about what makes one scientific theory better than another. But to truly, deeply believe that we are machines, we must see the clockwork in action. We’ve all heard that the soul is dead. Now we want to see the body.

This is an admirably candid declaration. It is worth bearing in mind that it is precisely a programme that is being described here, and not a set of conclusions. No amount of research could ever demonstrate the absolute reduction of persons to being "simply complex biological machines," and Greene does not here aspire to demonstrating it. It is not a proper object of attempted demonstration; it is a motive.
I think it's a motive that Sam Harris shares, and it bears underscoring that it's not got much to do with evidence per se.
I say nothing here about the rightness or misguidedness of this motive. What I am fairly sure of is that (1) my brain is doing a lot of work when I evaluate it, and (2) if my right temporoparietal junction were knocked out by a magnetic field, a drug, or a tire iron, this would have no bearing on how I ought to evaluate it; that is, on whether the metaphysical picture it depends upon is true.
Sunday, March 21, 2010
It is necessary to understand
That a poet may not exist, that his writings
Are the incomplete circle and straight drop
Of a question mark
And yet I know I shall be raised up
On the vertical banners of praise.
Ern Malley, “Sibylline,” The Darkening Ecliptic
The philosopher Alexius Meinong is probably most famous (at least among people who know who he was at all) for having held that there are such things as non-existent objects; for instance, the Easter Bunny, the exiled king of Zembla, a Euclidean method for trisecting the angle. Meinong’s teacher Brentano had taught that mental acts—believing, intending, thinking, imagining—are all by their nature directed towards something. One does not promise “in general;” one promises to—. This is the famous "intentionality" of which phenomenology made so much. Well, philosophers always want to go one better. Heraclitus says you can’t step into the same river twice, Cratylus says you can’t step in even once. Meinong went one better, in good consistent fashion, arguing that since a perpetual motion machine or a ten-sided triangle are all things we can think about, they must have some sort of being, or such thoughts would be meaningless and nonsensical. A thought of a perpetual motion machine or a ten-sided triangle must each be thoughts of something—namely, nonexistent objects; thus the perpetual motion machine is an object, the functioning of which provides its own energy, but which does not exist; while the ten-sided triangle is an object that has ten sides, is triangular, and which could not exist.
All these objects are often said to exist in various possible worlds, or for short (since not all these worlds are compatible with each other, their proliferation is endless), in “Meinong’s jungle.” It should not take a great leap of imagination to see how my earlier exposition of the “worlds” of a poem might be amenable to being mapped onto this jungle. Poets might thus be justly expected to rejoice in this ontology, at least once they got over the initial annoyance of having logicians diagramming their poems formally.
But not everyone shared the excitement for this profusion, least of all among philosophers. The desire to prune this jungle—not to say clear-cut it—gave rise to the development of analytic philosophy, especially as informed by the Fregean distinction between sense and reference, and by Russell’s contrasting theory of descriptions. That is, the reaction against the fecundity of Meinong’s jungle led directly to the flourishing of the linguistic philosophy which Speculative Realism now derides as so much nail-filing and mere epistemology.
It is true that Meinong’s enthusiasm for objects of all sorts brings with it certain liabilities. When we countenance ten-sided triangles or weights so heavy that omnipotent beings cannot lift them, or this naked man who is wearing a tuxedo, or a pipe that is not a pipe, we find ourselves, as Russell drily remarked, “apt to infringe the law of contradiction,” and to flout that of the Excluded Middle. Since it is a commonplace in logic that from a contradiction, anything and everything can follow, these objections need answers if we are not to be expelled from this paradise which Meinong has created.
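That commonplace is the principle of explosion (ex contradictione quodlibet), and its classical derivation takes only a few steps:

```latex
% From a contradiction (P and not-P), any arbitrary Q follows:
\begin{align*}
1.\quad & P           && \text{premise} \\
2.\quad & \neg P      && \text{premise} \\
3.\quad & P \lor Q    && \text{disjunction introduction, from 1} \\
4.\quad & Q           && \text{disjunctive syllogism, from 2 and 3}
\end{align*}
```

Paraconsistent logics, which tolerate some contradictions without collapse, standardly block the final step by rejecting disjunctive syllogism; this is what makes it formally possible to keep the jungle standing.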
One way is to decide that contradictions, or at least some contradictions, aren’t so bad—an option I’ve mentioned as a version of the two truth theory. But there are other ways that attempt, by formulating relevant distinctions, to preserve the intuition Meinong had. One of Meinong’s students outlined such an attempt with a distinction between being determined by a property (or as he called it, an “objective,” because it characterizes an object), and satisfying it:
“Form-determinates” are conceptual objects, determined by properties (“objectives”) but not necessarily instantiating or satisfying the properties that determine them.
This distinction can be expressed more simply as exemplifying and encoding, respectively. (This updated terminology is owed to Edward Zalta). To exemplify a property is simply to fulfill the normal relation with it we express in predicate logic: “The cat is on the mat” and “The cat has eaten the canary” mean, respectively, that the cat exemplifies the property of being on the mat or having dined upon the canary. But for the fictitious cat, say the Cheshire Cat in Alice's Adventures in Wonderland, one needs a different sense of the words is or has. The sense in question is encoding; the Cheshire Cat encodes the property of being a cat, of grinning, of being able to vanish, and so on, but does not exemplify them. In like manner, John Keats exemplifies the property of being a poet, whereas, in the fragment cited at the top of this post, the “poet [who] may not exist” encodes this same property.
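As a rough illustration (a toy model of my own, not Zalta's actual object theory, which is an axiomatic second-order system), the two modes of predication can be sketched like this:

```python
# Toy model of Zalta's exemplifying/encoding distinction.
# Illustrative only: real object theory distinguishes the two as primitive
# modes of predication, not as attributes on classes.

class AbstractObject:
    """An object determined by the properties it encodes.

    It need not exemplify any of them; what it exemplifies outright
    are only properties like being abstract."""
    def __init__(self, name, encoded):
        self.name = name
        self.encoded = set(encoded)      # properties that characterize it
        self.exemplified = {"abstract"}  # what it actually is

class OrdinaryObject:
    """An existing object; it exemplifies its properties outright and
    encodes none."""
    def __init__(self, name, exemplified):
        self.name = name
        self.encoded = set()
        self.exemplified = set(exemplified)

cheshire_cat = AbstractObject("Cheshire Cat", {"cat", "grins", "can vanish"})
keats = OrdinaryObject("John Keats", {"poet", "English", "mortal"})

# The fictional cat *encodes* cathood without *exemplifying* it:
assert "cat" in cheshire_cat.encoded
assert "cat" not in cheshire_cat.exemplified
# Keats simply *exemplifies* being a poet:
assert "poet" in keats.exemplified
```

The payoff of the distinction is that "the Cheshire Cat is a cat" and "Keats is a poet" get different logical forms, so the fictional cat never contradicts the true claim that no existing cat grins and vanishes.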
So, if you’ve been long-suffering enough to make it this far, we’ve come back to poetry. I want to offer this modified version of Meinong's ontology of nonexistent objects as a way of talking about the impact poetry has on us.
The attempt to “whistle,” as Ramsey called the early Wittgenstein’s efforts—to suggest more than can be said; to evoke experience rather than (impossibly) describe God in words; to “eff the ineffable,” as Rorty said (if anyone knows where Rorty got this I’d be glad to hear)—leads potentially to all sorts of problems in poetry, among them especially a burgeoning of connotation and vagueness or of imagery and sound at the expense of a coherent statement. Every time you hear a poem denounced as obscure, this concern is in play. Suggestion and association abound; straightforward descriptions of actual events, it is complained, are few.
These denunciations have something to do with the unease I referred to earlier about discussing poetry, an unease which bears comparison with discussing religion. In a discussion about religion one soon discovers that one is not dealing with assent to propositions but with entrenched positions that orient one’s whole life, strenuously propounded and grounded in intense emotional investment. This is not infrequently the case whether one speaks of belief or unbelief. One opens oneself up in such talk to being met with incomprehension, condescension, derision, and revulsion.
I mentioned this in regard to the so-called “science wars;” religious believers are frequently in the uncomfortable position of being regarded as quaint or crazy or stupid. (N.b. this happens just as frequently between believers, as between believers and nonbelievers). But there is an analogous regard when it comes to art. It is easier to tell your friend, perhaps, that you don’t share their taste in music or film than that you think their religion is kooky; perhaps also easier than to tell them you think they threw their vote away on a demagogue or a tool of special interests. But for those for whom art is a religion of its own, or who understand the two as closely related, an artistic squabble is every bit as worthy of going to the wall as a political fight.
Poets have been faulted for “difficulty” long before MacLeish tried to ward off the charge of obscurity by claiming famously that “a poem should not mean but be,” and the charge has frequently come not from the philistine public but from other poets. Coleridge called the poetry of John Donne “meaning’s press and screw;” Robert Graves once offered a monetary reward to anyone who could satisfactorily explain to him a particular set of lyrics of Dylan Thomas’; the Language poets were (and still are) routinely denounced as getting away with something and calling it poetry. Exactly what, such critics want to know, are lines like those of Leslie Scalapino’s “Chameleon Series” supposed to mean?
Or these lines, the first verse of “Rich in Vitamin C” by J.H. Prynne:
delivers truly the surprise
of days which slide under sunlight
past loose glass in the door
into the reflection of honour spread
through the incomplete, the trusted. So
darkly the stain skips as a livery
of your pause like an apple pip,
the baltic loved one who sleeps.
One could multiply examples ad infinitum. The one I’ll use (it’ll become clear why) happened in Australia, during World War II. The avant-garde was late in arriving down under, and in the ’40s, Australian literary culture was split between those who wanted, and believed in, a poetry that held to the norms of English verse from Chaucer through Dryden to Yeats, and on the other hand, partisans of modern trends in poetry like the Surrealism that had swept through Europe two decades earlier. In the first group were poets like A.D. Hope, James McAuley, and Harold Stewart, poets whose work was careful in its craft and attentive to traditional themes. Against these, the quarterly Angry Penguins championed the work of experimental poets willing to try free-association and free verse, in particular Max Harris (who edited the journal), D.B. Kerr, Paul Pfeiffer, Geoffrey Dutton, and above all, Ernest Lalor Malley.
Ern Malley was the author of a single work, The Darkening Ecliptic, a sequence of sixteen poems that exploded on the Australian literary scene in 1943. (The poem I cite at the beginning of this post is from it). Malley himself, a figure tailor-made from the mythology of the Romantic poet, a kind of cross between Chatterton and Rimbaud, had died some months previously, of Graves’ disease, unpublished and unheard-of; his sister Ethel had forwarded the poems to Max Harris at Angry Penguins when she had discovered them upon going through his belongings. The poems show the unevenness of young work, but even now one can recapture something of what must have moved Harris, when first reading them, to feel he was discovering an unsung genius:
It was a night when the planets
Were wreathed in dying garlands.
It seemed we had substituted
The abattoirs for the guillotine.
I shall not forget how you invented
Then, the conventions of faithfulness.
It seemed that we were submerged
Under a reef of coral to tantalize
The wise-grinning shark. The waters flashed
With Blue Angels and Moorish Idols.
And if I mistook your dark hair for weed
Was it not floating upon my tides?
I have remembered the chiaroscuro
Of your naked breasts and loins.
For you were wholly an admonition
That said: “From bright to dark
Is a brief longing. To hasten is now
To delay.” But I could not obey.
Princess, you lived in Princess St.,
Where the urchins pick their nose in the sun
With the left hand. You thought
That paying the price would give you admission
To the sad autumn of my Valhalla.
But I, too, invented faithfulness.
When Harris published the poems, they created a scandal, the sort of thing that supposedly happened back in the heady days when the debut of Stravinsky’s and Nijinsky’s Rite of Spring caused riots in Paris. It is hard to imagine now.
Part of the scandal, though not the most interesting part, had to do with content: the poems were deemed lewd, indecent, immoral, and very nearly blasphemous. Official obscenity charges were filed. The police confiscated the entire edition of Angry Penguins that had not sold out. At issue were lines like “the chiaroscuro / Of your naked breasts and loins,” above, but also “Part of me remains, wench, Boult-upright / The rest of me drops off into the night,” or “The body’s a hillside, darling, moist / With bitter dews of regret. / The genitals (o lures of starveling faiths!) / Make an immense index to my cold remorse;” or finally, “There is a moment when the pelvis / Explodes like a grenade.”
Some of these lines are good, and some don’t quite work; and indeed, as Michael Heyward notes in his history of the episode, The Ern Malley Affair, “Though it was never stated with such clarity, the Crown case seemed to be that where the poetry was not obscene it was unintelligible, and that was almost as bad.”
But, adapting the modified Meinongian ontology of nonexistent objects, we can perhaps say that a poem—and not only a poem—evokes a world in which the discourse of the poem is a meaningful utterance. In which, for instance, the line that opens Malley’s “Egyptian register”—
The hand burns resinous in the evening sky
—rather than being just gorgeous or jarring nonsense, is a full bearer of meaning. It does not matter, for our purposes, just how this meaning is construed—a metaphorical hand of plumed clouds at sunset, or a victim’s severed limb falling from a wrecked airplane, or some dreamy yet-unthought-of sense—what matters is that the poem unfolds as an invitation to believe in a possible world in which the even most nonsensical-sounding phrases are legitimate moves in the language. ’Twas brillig, and the slithy toves did gyre and gimble in the wabe. “So the Father is God, the Son is God, and the Holy Spirit is God. And yet they are not three gods, but one God.”
What I call metalepsis is the passing through the semi-permeable borders between worlds, including the “real world” and its images in discourse. From the world of Dante to that of Darwin; from that of the Abhidharma to that of the Athanasian Creed.
Often such a move is made by our own initiative. It is, after all, we who write poems. But sometimes reality teases us (we feel) with a trope of its own.
Part of the scandal of Malley’s poetry, I mentioned, was its content. The other part, the more interesting part, was its form, or indeed, its very existence. Malley’s line concerning the poet who “may not exist,” with which I illustrated Mally’s account of nonexistent objects, refers of course to himself. For both Ern Malley and his sister Ethel were fictions, invented by the anti-modernist poets James McAuley and Harold Stewart one afternoon in 1943, a slow Saturday in their office at the land headquarters of the Australian army, where they were uniformed noncombatants. They had done it in order to send up the conceits of modernist poetry and have a laugh at the expense of Max Harris. It was, in short, the poetic equivalent of the Sokal Social Text hoax. As soon as Angry Penguins hit the newsstands, rumors leaked out. Eventually Stewart and McAuley issued a press statement claiming responsibility for all the poems, which, they said, were entirely without either meaning or merit. Neither Stewart nor McAuley was named in the obscenity charges—writing smut was not a crime, but publishing it was—and neither of them commented upon the trial until afterwards (Harris had been found guilty and fined £5), in part to say that the legal coda had been no part of their intention. That intention, they maintained for the rest of their lives, was part jest and part serious: to demonstrate that literary fads could blind intelligent readers to questions of quality.
They may have succeeded too well. Many readers, not only Harris but also Sir Herbert Read at the time, and later poets like John Ashbery, have held that, whatever McAuley and Stewart’s aims, Ern Malley had succeeded in writing genuine poetry of real quality. From England, Read cabled Harris in support: “I too would have been deceived by Ern Malley but hoaxers hoisted by their own petard as touched off unconscious sources of inspiration too sophisticated but has elements of genuine poetry.” This explanation Harris maintained to the end of his days.
Neither Stewart nor McAuley would ever achieve the acclaim or notoriety of Ern Malley for any subsequent work. Neither of them ever tried to repeat the experiment; neither recanted their artistic creed that modernism was bunk. They remained traditionalists: McAuley converted to Roman Catholicism and eventually wrote texts for a number of hymns; Stewart became an authority on his adopted nation, Japan, and on Buddhism. I have read a good chunk of their later work, especially McAuley’s essays and Stewart’s long poem By the Old Walls of Kyoto, and am fairly sure that the relative neglect it suffers is not on artistic grounds but simply a function of the way the hoax acts as a strange attractor for our attention. Still, there is no denying the fascination Ern Malley exercises, both as a character and as a poet. And indeed, Stewart had read a good deal by Jung, and his 1950s volume Orpheus and other poems shows the influence. Maybe Read’s theory that the hoaxers had “touched off unconscious sources of inspiration” bears some weight.
Many questions are raised by the Ern Malley hoax: what is the nature of poetic quality? How do we know? What exactly is the role of authorial intention in it? Can a poem be meaningless and still excellent? These questions have been with us a long time. But for me the strangest aspect of the affair is not the meaning that Harris got out of the words that Stewart and McAuley threw onto the page as they chanced to cross their minds; it’s simply Ern Malley’s name.
Many etymologies have been suggested for “Ernest Malley.” Ernest, because the hoaxers were not, because of Oscar Wilde’s Importance, because of the pun on “earn” and the fact that they felt Harris had earned this comeuppance. “Malley” from the French mal, bad, with a whiff of Baudelaire, or from the malleefowl, a distant Australian cousin of the chicken, or from melee for their polemical intention. What I am almost certain of is that they were not referring to Ernst Mally.
Who? Ernst Mally. The student of Meinong who proposed the distinction we saw above, which we used to say that Keats exemplifies the property of being a poet, whereas Ern Malley encodes it. Ernst Mally, who more than any other student of Meinong worked to elaborate a grammar by which we could speak about nonexistent objects, like the nonexistent poet, with (almost) the same name, of whom he never heard.
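For readers who like their ontology operational, the exemplify/encode distinction can be caricatured in a few lines of code. This is only an illustrative sketch under my own assumptions (the class and names below are mine, not Mally’s or Zalta’s formal apparatus): an ordinary object exemplifies its properties, while a fictional object like Ern Malley merely encodes the properties his story attributes to him.

```python
# A toy sketch of the exemplify/encode distinction (my own illustrative
# framing, not Mally's actual formalism). Ordinary objects exemplify
# properties; abstract or fictional objects encode them.
from dataclasses import dataclass, field

@dataclass
class Object:
    name: str
    exemplified: set = field(default_factory=set)  # properties the object really has
    encoded: set = field(default_factory=set)      # properties its story attributes to it

keats = Object("John Keats", exemplified={"is a poet", "died young"})
ern = Object("Ern Malley",
             exemplified={"is a fiction", "is a hoax"},
             encoded={"is a poet", "died young", "wrote The Darkening Ecliptic"})

# Both are "poets," but in two different senses:
assert "is a poet" in keats.exemplified
assert "is a poet" in ern.encoded and "is a poet" not in ern.exemplified
```

The point of the toy model is just that no contradiction arises from Ern Malley being a poet (encoded) and not a poet (not exemplified) at once, which is the work the distinction was built to do.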
It’s a different Jungian concept that comes to mind now: synchronicity, the name Jung had for “meaningful coincidences”: the precognitive dreams, the phone call from your old schoolmate on the very day you think of him for the first time in years; the wedding invitation that goes astray only to turn up the day of the funeral; the twins who die in separate accidents in different towns at the same hour. Or the non-existent poet who shares a name with the philosopher of non-existent things.
Now I know all about confirmation bias, Littlewood’s law, and the Law of Truly Large Numbers. I understand that there are skeptical rebuttals to any assertion of “meaningful” coincidence. But everyone has instances of synchronicity that for whatever private reasons strike them before they can marshal their skepticism—probably because these incidences have to do with their personal interests. Mall(e)y is one of mine.
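Littlewood’s back-of-the-envelope reasoning can be made concrete. The figures below follow his usual statement of the law (one “event” per second over roughly a month of alert hours, with one-in-a-million as the threshold for a “miracle”); the function name and exact numbers are my own assumptions, not Littlewood’s text.

```python
# A sketch of Littlewood's law of miracles: at one "event" per second
# during ~8 alert hours a day, a one-in-a-million event becomes likely
# within about 35 days. Figures follow the usual statement of the law.
def prob_at_least_one(p: float, n: int) -> float:
    """Chance that an event of probability p happens at least once in n independent trials."""
    return 1.0 - (1.0 - p) ** n

alert_seconds_per_day = 8 * 60 * 60              # ~8 waking, attentive hours
events_in_a_month = alert_seconds_per_day * 35   # just over a million "events"

p_miracle = prob_at_least_one(1e-6, events_in_a_month)
print(f"P(at least one 'miracle' in a month) ~ {p_miracle:.2f}")
```

On these assumptions the probability comes out a little under two-thirds, which is the standing skeptical rebuttal to any single Mall(e)y-style coincidence: given enough events, some “miracles” are simply due.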
In 1943, while Stewart and McAuley were writing the Malley poems in their barracks office, Mally was a retired professor in Germany. By the time the obscenity trial got underway in late 1944, Mally had died. His later work has been deeply criticized for incorporating and attempting to justify Nazi ideology concerning the German Volk and its metaphysical opposition to degenerate values, and very little of his philosophical work has been translated into English. If either of Ern Malley’s creators ever heard of him, I have not been able to find out. Well before Mally fell under the Nazi spell, Meinong’s jungle had retreated in significance before the Vienna Circle’s slash-and-burn, and only since the ’80s has Mally’s thought really begun to receive serious attention in the English-speaking world. (The one exception I know to this is the philosopher John Findlay).
Synchronicity is sometimes read in terms of the Trickster, the wily figure from many mythologies who always turns the tables, an ambiguous character who is not on anyone’s side but sometimes functions as a deus ex machina, even for the gods.
Malley was a trick, a joke on the joke his makers thought modern poetry was. Too good a trick, Harris thought—better than Malley’s inventors knew. I have to admit that when I consider Mally and Malley, I have to wonder if it isn’t a lot better trick than Harris knew too. I haven’t a clue what it “means,” and most of the time I assume it means nothing; that it’s “just one of those things that happen from time to time.” Which is shorthand for, “What the hell am I supposed to do with that? Don’t bother me.”
But sometimes, reading ontology back-to-back with the surrealist poetry, I feel the hairs on my neck stand up. (You try it!) Then I’m bothered anyway. I can’t help but wonder, if only for an instant, whether Coyote or Loki or Anansi, some mischief-making Trickster of myth, hasn’t slipped into the sacred precincts of metaphysics. For a moment I think I’ve glimpsed some fleeting way of whistling “Mall(e)y,” as one might whistle “The hand burns resinous...” or “ ’Twas brillig…” It feels, to me, as though the real world were metaleptically sneaking through into the invented jape. Or, vice versa: as though reality itself were having a joke and writing a poem.
Not necessarily in that order.
[Addendum: Back when I first made the mental Mally/Malley association, some twelve or thirteen years ago, though I did not assume I was the only person to notice it, I could find no published mention anywhere. However, since writing this, I chanced on this article by the late David Lewis, formerly a professor of philosophy at Princeton. Lewis speculated that McAuley might have read Findlay's book Meinong's Theory of Objects, which does treat Mally at some length. Doubtless it is a more parsimonious explanation than is the notion of Loki or Hermes briefly taking up residence in Harold Stewart's typewriter. In any case, Lewis agrees that the notion that pure chance accounts for the coincidence "strains credulity." He acknowledges, however, that his reconstruction is speculative. While hoaxers do generally seed their work with clues, and Stewart/McAuley are known to have done so, I am still unaware of any evidence that either of the "real authors" of The Darkening Ecliptic was aware of Ern Malley's namesake. And besides, Meinong's Jungle abhors parsimony.]
Monday, March 15, 2010
Richard Crary’s post nicely dovetails with some thinking I’ve been doing on the way arguments unfold over science online.
In the SCT post that Crary references, I wrote that “The philosophical problems that interest me are the ones that are most contentious,” so I can’t very well let the last hullabaloo over “Science” and “Religion”, those two chimeras, pass without some remark. (Disclaimer: in what follows, I will be throwing both of these words about like they were going out of style, which I deeply wish they would. They are very broad umbrella-terms with a baker’s dozen definitions. I admit that there’s something sloppy about using terms I deem problematic, merely for the sake of a brevity I won’t achieve anyway. But it’s tiresome to constantly say, “a certain kind of…” And in any case, some of the positions I’ll discuss actually deny that there is much pertinent substantive variation among religions or that science is as various as that (for the record, I think it’s less so than “religion” myself). I will try to put in the qualifiers when they are relevant).
dy0genes recently wrote (and I largely agree) that scientific ethics, or rather, the ethics of science, should arise out of the practices of scientists themselves. The debate here shows a beautiful and elementary case study—elementary because it hasn’t anything to do with embryos or vivisection, but with the plain old question of whether and how to be nice to each other. In particular, to someone with whom you disagree, or even who you think is a fool.
The latest round of this perennial dispute seems to have arisen over a book. A usual enough occasion, you might say, but in this case the book has not been written yet. Chris Mooney, author (with Sheril Kirshenbaum) of Unscientific America and the blog The Intersection, was lambasted upon announcing his receipt of a Templeton Foundation fellowship to write about the relation between science and religion. Mooney, who I gather is an atheist, is frank about his conviction that science and religion can be compatible (depending mainly upon the kind of religion), and also about his criticisms of those who take a hard-line incompatibilist stance. The Templeton Foundation, too, is frank about its compatibilist stance on the question. One would have thought, then, that the award would not have raised many eyebrows, but Mooney’s site was drenched in incivility. Not waiting for the book to be written, incensed commenters denounced Mooney himself. (He was accused, for instance, of having accepted a $15,000 bribe in exchange for his journalistic integrity.) Inevitably, the blog cascade effect took over. The debate has moved on to the twin questions of (1) whether science and religion really have, or do not have, irreconcilable differences, and (2) what to do in either case. I’m interested in these questions, of course, but even more so in the manner in which they are debated.
I think there are roughly three camps here, with some overlap between the second and third. I’ll try to lay them out as I see them: Incompatibilists, Compatibilists, and (between these) Accommodationists.
In the Incompatibilists’ corner: Jerry Coyne, P.Z. Myers, Larry Moran, and the names you’ve all come to love, Richard Dawkins, Daniel Dennett, among others. (Note: unless I am referring to a particular post, I am linking only to a blog's front page. I encourage you to search through and look around on all of these sites.) Their position, stripped down to its essentials, is: here is science—peer-reviewed, repeatable, mathematized, rigorous, skeptical, falsifiable, prediction-making science; and here is superstition, it matters not of what stripe—transubstantiation, reincarnation, bilocation, you name it. Now choose, because you can’t have both. Thus the aforementioned tend to be extremely impatient with religious believers; not merely with those who hold that God made the world some 6,000 years ago, working overtime for six days, but perhaps even more with those who are perfectly happy to say that God outsourced the whole shebang through Darwin. To the Incompatibilist, such theistic evolutionists are just victims of their own compartmentalism. They may, it is conceded, do good science. They may also perform valuable service by lobbying against bad science. But then they go and spoil it all by also holding some damn fool notion like, oh, “God exists.”
In the Compatibilist corner, besides big guns like Templeton, you have people like NIH director Francis Collins (former head of the Human Genome Project); Ken Miller, professor of biology at Brown University (and expert witness against Intelligent Design); and, in the blogosphere, John La Grou at microclesia, Amod Lele at Love of All Wisdom, or the BioLogos Foundation (not all of these folks have commented, so far as I know, on the most current controversy—though Amod did have a great post on it recently; I’m just including them to give a sense of some Compatibilist positions). Note, one need not hold that science supports any religious position at all in order to be a Compatibilist. It is not necessary to believe that quantum physics demonstrates the truth of Vedanta or the plausibility of transubstantiation. One might, for instance, hold that religion treats ethics and existential questions about meaning, that science treats empirical questions about the natural world, and that there is no need for the one to step on the other’s toes. Gould’s Non-Overlapping Magisteria proposal is one example of Compatibilism. (I am dubious about NOMA as a fine-tuned strategy, for the details get messy, as I’ve argued before.)
Now, you’d think these two camps might be able to join forces against the young-earth creationists or the Intelligent Design champions, and indeed, they make common cause; but they also fight in a manner that is sometimes shocking to behold. Folks like P.Z. Myers strongly object to this being said, but my unscientific impression is that the real incivility tends to come more (not always, but more) from the Incompatibilist camp. If you think about it for ten seconds, this readily makes sense: Compatibilists hold that science is sane, and religion is sane, so there is nothing inherently denigrating about a Compatibilist’s view of an atheist. (Of course a Compatibilist can hold that an atheist is somehow foolish or worse, but that is what you might call “an extra step that doesn’t have to be there.”) On the other hand, let’s face it: the Incompatibilist by definition holds that all religion is unfortunate and mistaken at best, stupid and wicked at worst.
So it stands to reason that some of that “you’re just silly,” “you’re superstitious,” “you're lying to yourself,” “you idiot!” attitude must leak through from time to time. And online manners being what they are, things often move past this, and the Compatibilists are fully capable of firing back in kind.
But what really strikes me is that the Incompatibilists’ greatest gripe—a fury reserved for traitors—is aimed (again, my subjective impression—I’ve done no statistical investigation) not at Compatibilists, but at those few hardy nonbelievers who have ventured into no-man’s-land: e.g., Josh Rosenau, John Wilkins, Michael Ruse, Chris Schoen, John Pieret, and of course, Mooney and Kirshenbaum. These are the Accommodationists, mostly agnostics or atheists, who have the integrity or the gall (depending on whom you ask) to admit/allege that religion and science may be (given the right definitions of each) compatible. Not are, but may be. You don’t have to insist upon the compatibility; what is essential to the Accommodationist case is that the necessity, in every instance, of conflict has not yet been demonstrated. Accommodationism is, moreover, a tactical stance; it seeks to further the cause of either science or religion (almost always the former) by avoiding unnecessary fights. Some Accommodationists stress the philosophical aspects, some the strategic; but especially the latter leads some of its opponents to suggest that Accommodationism is really just Incompatibilism without the courage of its convictions.
This may be the case in a few instances, but there is obviously nothing necessary about it (to say nothing of its being a wholly separate issue from the substantive one of whether the science/religion incompatibility has been demonstrated). To be an Accommodationist does not make you a mealy-mouthed apologist for Creationism. There is such a thing as a necessary fight, and any scientifically-minded Accommodationist has the ditch he or she is willing to die in. For instance, all of the aforementioned Accommodationists (and indeed the Compatibilists too) have voiced explicit and sometimes exasperated criticisms of theories of Intelligent Design.
(Of course, most partisans of I.D., for instance the Discovery Institute, might consider themselves compatibilists as well; but here we are speaking of “mainstream” science. The I.D. camp I would consider a separate entity, which goes beyond claiming compatibility; in its eyes, science (meaning, the science it approves of) actually supports religion. I’m at a loss for what to call this camp; I thought of “Appropriationist,” but it’s too tendentious. Suggestions are welcome.)
Now as I said, what interests me is the contentiousness of this debate. The acrimony on these blogs is loud, rank, and above all, perplexing. For instance, Michael Ruse has written a rather harsh denunciation of Alvin Plantinga, Thomas Nagel, and Jerry Fodor for giving aid and comfort to Intelligent Design. He is, in turn, denounced by P.Z. Myers for giving aid and comfort to… Intelligent Design. This leads Ruse to wonder why Myers, Coyne, et al. want to spend their energy attacking someone they might plausibly think of as an ally. One can see the same thing in the evaluation of Miller by Myers and Coyne. Ruse in fact was the object of a sound scolding for his own moral relativism from the Intelligent Design website Uncommon Descent, but this is not enough to make him sympathetic in an Incompatibilist’s eyes. To some of these folks, the enemy of my enemy is often my enemy.
(Well, of course, sometimes he is; I've read a good deal of Myers' past blog posts, and while he's gruff, he's not nearly as unreasonable as some people make him out to be.)
Ruse’s case is interesting because it shows up just how odd it can be to really see things from your opponents' point of view. To take a single example, one which segues with my earlier questions about trust: Myers reacted with deep, snide disgust at an anecdote in which Michael Ruse described going through the “Creation Museum,” run by a guy named Ken Ham in Petersburg, KY. As Andrew Brown reports, Ruse had written:
Just for one moment about half way through the exhibit ...I got that Kuhnian flash that it could all be true – it was only a flash (rather like thinking that Freudianism is true or that the Republicans are right on anything whatsoever) but it was interesting nevertheless to get a sense of how much sense this whole display and paradigm can make to people.
Ruse went on to reflect upon this unexpected bout of empathy:
It is silly just to dismiss this stuff as false – that eating turds is good for you is [also] false but generally people don't want to [whereas] a lot of people believe Creationism so we on the other side need to get a feeling not just for the ideas but for the psychology too.
Now whatever defects Ruse’s approach and stance may have, this sort of attempt to get inside the minds of your opponents is just the kind of meta-scientific experiment I mean when I speak of a philosophical engagement with the question of (In)compatibilism. It's also the sort of talk that leaves P.Z. Myers nonplussed:
Oh, right. Forget all that stuff about the earth being 6,000 years old, all the diversity of life on earth being packed into a boat for a year, and the adamant belief that atheists, agnostics, and theistic evolutionists are trying to destroy the nation for Satan…we're supposed to feel for them, and try to understand their psychology…. This is what is so awful about the "New Atheists": they are such horrible, insensitive louts. They can't overlook the teeny tiny little demand of biblical literalism to see that creationism isn't quite so wicked…. If only we'd try to see the world through their eyes, we would understand that their beliefs aren't stupid and crazy and wrong. Or something. I'm not quite sure what. I guess we're supposed to sympathize with them, and be less critical.
He then proceeds to count the ways he does talk with, understand, and sympathize with creationists.
I understand that many creationists are intelligent and sane — they share a lot of values with me, like wanting to be able to think as they please, to raise happy, healthy families, and they are very concerned about their children….I do sympathize with them. I feel great sympathy and sorrow for the fact that they've been lied to by deluded con men like Ken Ham, and that they're living lives driven by an irrational fear.
But there are also some for whom I have no sympathy at all.
I have zero sympathy for intelligent people who stand before a grandiose monument to lies, an institution that is anti-scientific, anti-rational, and ultimately anti-human, in a place where children are being actively miseducated, an edifice dedicated to an abiding intellectual evil, and choose to complain about how those ghastly atheists are ruining everything.
Those people can just fuck off.
This admirable rant, and Ruse’s thoughts to which Myers is reacting, are right at the crux of the matter for me. Though I do admire the rant (and I mean it—while with all my heart I disagree with Myers’ hardline Incompatibilism, I cannot but stand a little awestruck by his indefatigable and uncowed commitment to the cause he believes in), Ruse’s curiosity about what makes creationism compelling—despite how crazy he thinks it is—is, finally, every bit as worthwhile to a philosopher, who must be interested in everything, including the pre-reflective biases that dispose us to find one claim or another enticing, believable, worthy of our lives. Need I specify that this interest does not stop at sharing these biases, but includes critique? But how—in what spirit—does one critique?
I am deeply invested in the substance of the argument, and no one will be shocked to learn that I am a Compatibilist myself; but I keep coming back not just to what’s being said in the argument but to how people are saying it. What fascinates me is this boundary between content—the question of religion and science and their compatibility or lack thereof—and the process of the debate, which keeps turning from conversation into dogfight. Why are people so angry at each other? I don’t ask this in some faux-naïve oblivious manner; I get that there’s a substantive disagreement about reality, and that people are taking things personally. My interest in these arguments stems from good old-fashioned philosophical reasons. “What sorts of disagreements lead to hatred and wrath?” Why is it apparently so difficult to keep the conversation within the bounds of civility? Why do the comments swerve so quickly, if not completely away from substantive argument, at least towards liberal inclusion of condescension and insult?
There is something about this debate, all three or four sides (and counting) of it, that clearly we don't understand very well. We're not having the argument. It is having us.
Glancing over at my own blogroll, I noticed the headline at The Existence Machine: “Who Do You Trust?” A question that was on my mind, since I was thinking about science and the position of laypeople with respect to it. Turned out to be more relevant than I would have guessed: a long post by blogger Richard Crary specifically taking off from my observation that contentious matters always turn upon issues of trust. Crary writes:
Consider the following sentence: "You're entitled to your own opinion; you're not entitled to your own facts." … I myself have said much the same thing in political arguments. Of course, it rarely gets me anywhere. And as I've noticed that my arguments rarely get me anywhere (assuming those cases when I've been my most coherent and least defensive, and being as charitable as possible toward my interlocutors; it's not helpful going through life thinking everyone else is an idiot, even when they're wrong), I've often wondered how it is that we come to know and understand things, how it is we become open to certain ways of looking at the world.
Socrates probably noticed the same thing: “By Zeus! Callicles and I did not make much headway today, did we?”
Commenting on my post about the neuroscience of wisdom, dy0genes recently spun out a sort of science-fiction scenario (and there is no reason to presume it is inherently fictional) in which it is plausible to know what sorts of brain states correspond to what sorts of mental processes. Could we then artificially induce or repress these processes? This would be one way of approaching the question of “how we become open to certain ways of looking at the world.” We could program everyone to think “scientifically!” No more relying on revelation, or blind faith, refusing to look at the evidence… we would all be perfectly rational creatures, with valid reasons for thinking what and how we think. Well, except of course, for thinking this way in the first place.
Is there an irreducible difference between causes and reasons? No one, I take it, will contest that if I listen to an argument and change my mind, this is different from changing my mind as a result of undergoing hypnosis. (One of the things that makes psychoanalysis, and indeed psychotherapeutic theory in general, fascinating is the way it blurs this distinction. Ancient philosophy often reads like a weird blend of such therapy and modern informal logic.) If, instead of being hypnotized, I am injected with a drug or given some precise electrical stimulation to particular regions of my cerebral cortex, the case is if anything even clearer. I have not rationally changed my mind; I have had my mind changed for me.
Pursuing the consequences of this Phildickian premise would take me too far afield (questions about autonomy always involve identity, for instance; e.g., “Am I still the same person?”). But I think it is fair to say that if Socrates had been offered the opportunity to change Callicles’ mind with a syringe, he would have thrown the proffered instrument into the Aegean Sea. (This is not an argument against pursuing a line of research, but a way of drawing more starkly its ramifications).
Commenting on my claim that rival arguments about (for example) “What Really Happened on 9/11” all hinge upon unspoken investments of trust, Crary remarks:
I … have occasionally found myself wandering onto certain websites that purport to present expert testimony on, say, the physics of demolition and realizing that I had no basis for deciding the matter. My concern here, of course, is not 9/11 per se, nor is it [Skholiast’s], but rather this matter of trust. In particular, trust in the context of our highly technocratic capitalist society.
Crary goes on to cite Daniel Hind’s book The Threat to Reason, arguing that the alleged rising tide of irrationalism (superstition, fundamentalism, postmodern mumbo-jumbo) is a minor concern compared to the betrayal of the forces of reason into the hands of powerful corporate and government interests in its own backyard. Hind, and Crary citing him, point out the consequences of being continually beset by claims upon our trust. Institutions of state, education, science, commerce, and religion all vie for our attention and, yes, our credence. Crary underscores that part of the fallout from this is that one has to make an endless array of judgment calls about whom to listen to and—perhaps tentatively—believe. Hind’s suggestion that the legacy of the enlightenment (both its democratic and its scientific trajectories) has been, at least partially, hijacked by power, brings into stark relief the fact that we come by our commitments in large part unconsciously.
Pointing this out is neither the obscurantist ploy that impatient critics of either “conspiracy theory” (that unanswerable insult) or postmodernism say it is; nor is it a gotcha-move that outflanks all the he-said-she-said of contemporary debate, as though pointing out that Rupert Murdoch owns a newspaper meant one could, by that token alone, disregard—or even disbelieve—whatever is printed there. But it does name another set of trajectories to which one must pay heed. In the next post or so I will try to show one way in which these considerations pertain, with regard to the so-called Science Wars.
Whether we acknowledge it or not, a tremendous amount of our worldview has been caused, not decided—at least, not by us. Philosophy aims both to understand, and to prudently increase, the decision/cause ratio.
Saturday, March 13, 2010
While I gather my words for the next post on philosophy and poetry, here's a link to an article on the neuropsychology of wisdom. I am on record as insisting that any "philosophy" that discards sophia is just so much academic, cocktail-party, or coffeehouse power jockeying; but bearing in mind some recent admonishments about the impossibility of disregarding science, I thought I might attempt to allay suspicions with a short post on the science of mental states.
The article's author, Stephen S. Hall, maintains a website including a blog, where there are some marvelous posts. The article itself is the germ of a book, Wisdom: From Philosophy to Neuroscience. You can also listen to the radio interview Hall gave today. (Note that Hall's interest is science in general, not "just" wisdom; see, e.g., this interview on genetics, the impact of science, and "science writing," among other things. A look at some of his archived articles on his website will reinforce this).
On wisdom, one of the interesting things I note is the cross-cultural consensus on what it entails: among other things, emotional equanimity, a kind of altruism or empathy, and a long view; wisdom, Hall underscores, seems to be future-oriented. (It is no coincidence, I suggest, that this is what Heidegger says about Dasein in its authenticity).
Regarding emotional balance, I am reminded of another famous study, the so-called "Nun Study". A good summary is in Martin Seligman's book Authentic Happiness. The results point to a fairly strong correlation between optimism and longevity. The nuns (an ideal control population, since almost all 'outside factors' like lifestyle, diet, and exercise were eliminated) all wrote personal statements in the 1930s; many decades later, these statements, scoured for keywords betokening either optimism or pessimism, yielded surprising results when matched with the actual lifespans of their authors. The study seems to be strong evidence that optimism really does correlate with long life.
However, another study, by Lauren Alloy and Lyn Abramson (also referenced by Seligman), yields a curious twist when contrasted with the Nun Study. College students were put into situations in which they had a degree of control over a certain outcome--turning on a green light. Sometimes that control was zero, sometimes it was 100%, sometimes it was in between. Students who were depressed tended to accurately assess how much control they had. Non-depressed students tended always to overstate the degree of control they had--sometimes believing they were influencing the outcome when in fact it had nothing to do with anything they did.
This is the famous "illusion of control," which plays into everything from gambling to witchcraft; it's also the much-debated phenomenon of "depressive realism."
Put these two studies together and what do you learn? In short, and oversimplifying for effect: if you are a look-on-the-bright-side type, you will be happy and long-lived, but wrong; and if you are a pessimist, you will tend to be accurate, dour, and die young. But a paradox arises: how are we to assess this apparently accurate finding? Does it reinforce pessimism? Is it possible to cultivate optimism for the sake of its benefits (apparently accurately assessed) even when you know that pessimism, not optimism, hones (or at least correlates with) accuracy?
It may be possible to be scientific about happiness, and even about wisdom. But it is harder, and more important, to be wise about science.
Thursday, March 11, 2010
I want to register a peculiar feeling I had when first bracing myself to write about poetry online. The feeling was one of defensiveness, or rather protectiveness. I am willing to let my judgments about “philosophers” as traditionally understood stand in the public square, and I can discuss them with passion but also with equanimity. Poetry is not like this for me. I want to say I have “learned more” from poets than from philosophers, but this is inexact; what I have learned, however, is of a deeper register, and I treasure it more, because it feels more constitutive of who I am.
In this poetry is akin to religion, and indeed one of Badiou’s grave criticisms of Heidegger’s “suturing” of philosophy to poetry is that it enables, or at least imagines, a return to a para-theological mode of thinking. Kierkegaard held that the “religious stage,” which he put after the ethical, could easily be taken for a regression to the aesthetic. Indeed, as far back as Plato the poetic and the religious are entwined for philosophy, and as is well known, the earliest art is religious art.
As most readers here will know, Socrates speaks of a “quarrel” between poetry and philosophy, a quarrel already ancient by the time he refers to it, almost two and a half millennia ago. I tend to read this not as creative license on Plato’s part, but as simple report. As long as there has been philosophy, it has striven with—and against—poetry. I have always felt—and I am not claiming any originality here—that this is one of the clues to the meaning of Plato, and via Plato, to that of philosophy per se. I say this in full cognizance of how, um, dated? naïve? silly? too-big notions of “the meaning of…” can seem. And maybe they should seem naïve or silly. But I think Plato meant quite intentionally to cultivate the sense of awe that such phrases give, and not for the cheap reasons of building up his reputation for having some kind of secret wisdom. Rather, the expectation of some payoff, some Beatific Vision, is part of philosophical pedagogy.
“Why did Plato banish the poets?” The cliché answer has always been: poets lie. The gods they speak of—Zeus, Hera, Hephaestus, Artemis—do not exist. Or if they do, if there are indeed gods, they cannot bear any resemblance to the cast of warmongering, capricious lechers and cuckolds in the myths; not if they really merit our worship, not if they are gods. One may note that this critique is alive and well in the all-too-current harangues of and against fundamentalists of all stripes, at least those who stake part of their will-to-power in the literal truth of some sacred text. The God who rains down fire on Sodom, who “tests” Abraham with the request for a human sacrifice, who bets that Satan cannot win Job no matter how many of his family are crushed by falling houses—do we need a Cambridge-educated biologist to make us admit that such a god strains both our credulity and fealty? The poets—whether of Greece, India, Egypt, or Israel—may tell us pleasing stories, or at any rate moving stories, but the objects of those stories are unreal, nonexistent. So goes the stock explanation of why Plato showed the poets the door.
I am going to argue in a way that will seem naïve, conflating the poetry of many ages. For the record, I do not think that there are no differences between how ancient listeners of Homer and modern readers of Frost apprehend(ed) poetry. Indeed, I have given a good deal of thought to how ancient, medieval, and modern modes of reception diverge, and to how the ancient-medieval mode gave way. But for all that, poets themselves have always tended to view their history as more continuous than broken. Here I am concerned with poetry as experienced by poets, not by sociologically-minded historians or historically-minded sociologists.
With that caveat, I will explore a bit something that I said in my first post: that philosophy “takes the secret paths that go from world to world.” I want now to suggest a little more of what these worlds might mean, via two poems from the early 20th century.
There’s a well-known poem by Auden that sometimes goes by the name “Funeral Blues.” You may have seen it in the film Four Weddings and a Funeral.
Stop all the clocks, cut off the telephone,
Prevent the dog from barking with a juicy bone,
Silence the pianos and with muffled drum
Bring out the coffin, let the mourners come.
Let aeroplanes circle moaning overhead
Scribbling on the sky the message He Is Dead.
Put crêpe bows round the white necks of the public doves,
Let the traffic policemen wear black cotton gloves.
He was my North, my South, my East and West,
My working week and Sunday rest,
My noon, my midnight, my talk, my song;
I thought that love would last forever: I was wrong.
The stars are not wanted now: put out every one;
Pack up the moon and dismantle the sun;
Pour away the ocean and sweep up the wood;
For nothing now can ever come to any good.
Note the slow outward spiral from domestic considerations (put the phone off the hook, keep the dog quiet) to public ones (the aeroplanes are summoned, the policemen and the “public doves” are recruited) to the sudden eruption of personal grief that takes in the whole world: “My north, my south, my east, my west,” a grief that overflows the house and the public square to flatly aver that the cosmos as a whole is now irrevocably without purpose. And yet, the language for this cosmic despair is once more domestic: “Pour away the ocean and sweep up the wood;” the apocalypse reduced to meaningless household chores. The despondent voice is almost frightening in its inconsolability.
What I am interested in (now) with this poem is this absolute character, its refusal and even incomprehension of the very idea of ever "feeling better." When you read this poem and enter into it, it brooks no disagreement. Within the world of the poem, it is simply true: Nothing now can ever come to any good. There is no disputing these lines; no chink for any “chin up, old man—time heals all wounds!” rejoinder to slip through—not while you are reading the poem. This is not because the poem only says its same sixteen lines over and over again; the question concerns not the poem as textual artifact but as lived experience. Nor is it simply because taste or decorum forbids it—after all, there is no actual speaker whose real emotions we need to consider. No, it is because this is a world of grief, and the rejoinder is not tactless but meaningless.
I sometimes find it frankly miraculous that I am capable of looking up from reading this evocation of the laying waste of a life and going about my business. How is this possible?
I don’t offer an explanation, but only a way of speaking, about this question. This is possible for us, I suggest, because we inhabit a different world from that of the speaker; we are able to enter that world, and also to leave it. Upon departure, the house lights come up, we hear the sound of the traffic or the clink of dishes in the café or the ring of the phone; but until then, we are effectively within that story, and obey its laws.
It is quite possible to remain “within” a poem, or any other work of art, well after reading it. Cinema, that pseudo-liturgy of our age, has the most noticeable such effect, but books still can hold sway over us. The claim of a work of scripture is not that it is a story of people two or three millennia ago, but that it is the story we are in now. This is precisely why Kierkegaard could say that the Religious stage was like a recapitulation of the Aesthetic. And it is why philosophy requires us to struggle with both poetry and with religion. For philosophy wants to free us to navigate between such worlds at will—to be really there when we are there, and to always know there are others.
The passing from world to world I will call, after a long line of rhetoricians, metalepsis. If I claim not to coin this expression it’s because I think I am being faithful to a fundamental continuity of meaning. This isn’t to say I ignore the evolution of the term; but I assert that there’s a coherent development in its use. For Gerard Genette, metalepsis has to do with the passage between narrative levels (say, the nested narratives of the Thousand and One Nights), or even the way a narrative can refer to extra-narrative reality; for instance, the intrusion of the authorial “I” in Kundera’s Unbearable Lightness of Being, acknowledging the purely literary status of his characters. A work of literature can even fictively appropriate the “real world;” consider, for example, the moment in Barrie’s play Peter Pan when the audience is suddenly acknowledged and entreated to demonstrate its belief in fairies by clapping, in order to save Tinkerbell’s life.
For Quintilian, metalepsis (transumptio in Latin) is a kind of slipping from trope to trope; medieval and renaissance rhetoricians used it to denote an extreme metaphor; George Puttenham mentions it in his Arte of English Poesie (1589), calls it the “farfet,” as in far-fetched, and remarks that it is always impressive to women.
But go back far enough and you get to Aristotle, for whom metalepsis means “participation.” This is a very charged word. The other term for participation is methexis, and between the two of them these words inform the whole history of western thinking from Plato till Suarez at least. When I use the term participation, I have Aristotle and Plato in mind, but also the scholastics, and Levy-Bruhl. This is a matter for a post of its own. But for now I want to say simply that if participation is both a metaphysical and a literary trope, this is because “literature” is more than a matter of texts; it is a matter of thought.
It will be noted that I’m arguing for a kind of perspectivism, and I don’t want to wrap myself too tightly in Nietzsche’s mantle. But with due reference to him, I would say that we have here a case such as I spoke of before: when you read Auden’s poem, it also reads you. I will end with a different poem, at least as well known, to illustrate the point.
Archaic Torso of Apollo
Rainer Maria Rilke (tr. Stephen Mitchell)
We cannot know his legendary head
with eyes like ripening fruit. And yet his torso
is still suffused with brilliance from inside,
like a lamp, in which his gaze, now turned to low,
gleams in all its power. Otherwise
the curved breast could not dazzle you so, nor could
a smile run through the placid hips and thighs
to that dark center where procreation flared.
Otherwise this stone would seem defaced
beneath the translucent cascade of the shoulders
and would not glisten like a wild beast's fur:
would not, from all the borders of itself,
burst like a star: for here there is no place
that does not see you. You must change your life.
No matter how many times I have read this, I feel the final lines like a physical shock:
…denn da ist keine Stelle, die dich nicht sieht. Du mußt dein Leben ändern.
The first few times, I almost jumped.
The poem makes tremendous claims for the power of art, as though there were an animistic force held by the broken piece of sculpture, a force that can compel you to admit the necessity of some radical alteration. The language, steeped in eros, is surprisingly even and balanced for all that. Note the subtlety with which its strange indirect para-syllogisms establish what it assumes, without ever asserting it. Thus: “his torso/is… suffused with brilliance… //[because] …Otherwise/the curved breast could not dazzle you so.” How indirectly it has gained our acquiescence—we are dazzled, before we even knew ourselves to be so. “Otherwise this stone would seem defaced,” meaning, it does not seem defaced. It is in fact unthinkable that it should seem defaced; the wholeness of the work of art suffuses it, holds it, overflows it; it “bursts like a star.” And here again, precisely as with Nietzsche’s abyss, it is not a passive object of our attention, but gazes back.
From its opening declaration of incapacity, “We cannot know,” to its final imperative, there is in the world of this poem no demurral possible. A quarrel with the poem can only happen from without it. It is not that one cannot argue with it, cannot set about to reconstruct the legendary head, or cheekily respond “Oh, must I indeed?” to its last five words. But to do this is in a decisive sense not to read the poem. Within its world, it is simply true: you must change your life. What this means is of course different depending on what “your life” is. It is silly to think that reading these lines by Rilke magically brings about long nights of introspection; I am not contending that this poem has a this-worldly effect of this sort, but that within the poem one recognizes the experience of encountering this imperative; and that, within this world, the experience is irrefutable, and gainsaying it, meaningless.
A great deal more could be said about both these poems, and I probably will say a bit of it in the future, but over-commentary, while it cannot kill the poem, discredits the critic.