Future, Present, & Past:



Speculative~~ Giving itself latitude and leisure to take any premise or inquiry to its furthest associative conclusion.
Critical~~ Ready to apply, to itself and its object, the canons of reason, evidence, style, and ethics, up to their limits.
Traditional~~ At home and at large in the ecosystem of practice and memory that radically nourishes the whole person.

Οὐδεὶς ἄμουσος εἰσίτω ("Let no one without the Muses enter")

Monday, September 26, 2016

9/26/1983


I have never yet reblogged someone else's post, though I have come close. This time I am coming very close indeed.

My last post but one commended a "politics above politics." That phrase may sound like a sort of in-the-air theoretical program, even though it specifically points to Aron's description of Cavaillès' resistance work against the Occupation. I want to point to a particular instance of what I mean by it. It's a true story (though obviously it has been imagined in the text I'm pointing you to), about what the examined life means in practical terms. It has nothing to do with reading Spinoza, though it certainly has to do with "following necessity," with doing what must be done -- and not doing what must not be. It's an instance that may in fact (though experts argue) be responsible for your being around today.

So, go read this. It was published a year ago today, about a decision made 33 years ago today. If you already know the story, it's still a very beautiful and succinct re-telling. If you don't know it, you should.

Saturday, September 24, 2016

Uses of indignation


This is a somewhat dashed-off response (which outgrew the comment box, whaddya know) to Kawingbird's comment on my previous post. It is scatter-shot, but I hope it addresses the points raised (if not adequately). Go read that comment if things here are to make sense.

K. points out that "indignation," about which I cited Alexander Piatigorsky as saying that it must be avoided by a philosopher ("... he can be indignant only with himself"), is a standard translation of the term nemesis that Aristotle uses in the Nicomachean Ethics.

Now, a truly adequate response would need to address several other things. To begin with, Nemesis is also a goddess, and clearly a puzzling one; in pursuit of her, Zeus adopted the form of a swan and, at least according to one myth, thus fathered Helen. Nemesis is thus confused/conflated with Leda, just as Helen herself winds up confused with a phantom; and of course the whole tragic saga of the Trojan War follows ("...A shudder in the loins engenders there / the broken wall, the burning roof and tower / and Agamemnon dead..." as Yeats puts it). Not an auspicious guardian deity for politics! And surely this is relevant, given the role anger plays in the Iliad from the first line on.

But n.b. that Piatigorsky does not rule out indignation per se; he insists that it find its proper object. Indignation seems to involve a sort of affront, a how-dare-you. It implies (and this is tied into the etymology of "ought") that something is owed. Is there then a debt that needs paying, one that is being forfeited or at least forestalled? Compare the fragment of Anaximander:
Whence things have their origin,
Thence also their destruction happens,
According to necessity;
For they give to each other justice and recompense
For their injustice
In conformity with the ordinance of Time.
Now I don't want to run too blithely down the road to political ontology or vice-versa; I just want to register here the overlap between the implicit notion of debt in this context and the way the idea of [in]justice is deployed -- the way the ought and the is seem to interact in Anaximander in a way that is hard for moderns to take (i.e., deriving "ought" from "is"). But this is also what one sees in someone like Cavaillès, for whom Spinozism can be a metaphysical account of the world but clearly also indicates what we must do (and, after all, is unfolded in a book called "Ethics").

The problem, however, is not with the ought, i.e., with the falling short. This is indeed the meaning of "sin" as well. But the question is: to whom is this owed?

In answer to this question, indignation says: Me!! This is surely one admissible reading of the thumos of Achilles, and the corresponding, answering anger of Agamemnon. This "Me!" sounds through the world in a far different way than Hopkins means when he says:
Each mortal thing does one thing and the same:
Deals out that being indoors each one dwells;
Selves — goes itself; myself it speaks and spells,
Crying Whát I dó is me: for that I came.
In Hopkins, the one note each thing sounds is "myself" in the sense of individuation. It's a message out into the world, rather than a demand for something from the world. Moreover, Hopkins' context is expressly connected to the Aristotelian context K evokes, for in the very next line Hopkins goes on to characterize this individuation: "I say more: the just man justices."

Now the indignant ego is not unconnected to the sense of justice, clearly; it is, after all, closely tied to the question of dignity, and one could ring the changes on this word philosophically for a long while: through all its resonance with decorum, praise, worth, worship, all the way to the theological anthropology implicit in the notion of imago dei. In my initial drafts for this post, which I've now lost, I had some excursus on the way megalopsychia in Aristotle interacts with dignity. This portion of the Ethics starts out almost seeming to promise that the reader has arrived at the hoped-for conclusion of an account of how to live; but by the end, it has gotten almost silly in its depiction of the bearing of the great-souled man (all the way down to his "slow, measured gait"). Contrast this with Socrates, always barefoot in the agora, or Diogenes in his tub!

Now I believe one can leverage a sense of indignation into philosophical insight, but the sucking force of the ego is extremely strong and dangerous. I am oversimplifying, but the ancient consensus seems to be that philosophers should basically eschew politics because it takes up too much time. I think there's a deeper, much more pertinent, reason behind that.

One further point: K characterizes me as saying one should air grievances, but keep a kind of emotional distance. I'm not merely calling for cooler heads to prevail here. I'm saying that naming the emotions is a crucial step in this process; not sufficient, but necessary. (As K writes, "it's always dangerous to think we've outgrown first steps.") I believe that a tremendous amount of ill follows from the fact that so much of our politics is conducted under the guise of bravado. I'm calling, I suppose, for a stance similar in certain ways to that advocated by Malcolm Bull in his Anti-Nietzsche: "read like a loser." I think that to stop being a victim requires first grasping how afraid one is of victimization.

Finally, Kawingbird writes:
I always find myself at this impasse when trying to think through politics. The general seems terribly relevant, the specific... rather philosophically dull.
As to the general and the specific: Yes. The post is torn in two directions, and it flounders as it tries to synthesize. This is one reason I re-wrote the post several times before I finally clicked 'publish', and I am still unsatisfied.

Saturday, September 17, 2016

Politics above politics: beyond the endgame


I have a good friend whose wisdom, depth and acumen have informed my own perspectives in countless ways. We are no longer frequently in touch, but for many years we lived in the same city, and he was instrumental in nudging me away from a kind of no-name newage syncretism to something grounded and particular, without losing my universalist motives. He has now spent many years working in Africa, going between choked, over-populated, poverty-ridden metropolises and rural villages with no wells or electricity. It's partly due to his updates that I know anything at all about the lives of populations (as opposed to individuals) that are truly in want. He tells me he's thinking seriously of relocating to Africa permanently. Why would he want to throw over our well-advertised American "prosperity" for African destitution just as he is beginning to have to take the measure of retirement and eventual old age? Well, he's about ten or fifteen years older than I am, and like many who have spent their lives really thinking rather than "getting ahead," his patchy employment history and itinerant life have left him with scant savings and little to look forward to in terms of any social safety net. But that's not the deciding factor. A bit less than two years ago, he wrote me:
I am getting really scared of America. I think this is Nazi Germany in 1934, so I want to get out, and I want to get my sister and nephew out.
This was in November of 2014. Note that date -- long before the pus of Presidential nomination campaigns had begun to ooze. Since my friend wrote this to me, the number of people comparing a four-times-bankrupt tycoon and T.V. "star" lately come to American politics with a certain figure from 1930's Germany has ballooned until the analogy is a depressingly predictable trope.

Despite myself, I cannot but be distracted by the U.S. Presidential campaign and the attendant great clouds of dust kicked up by the national commentariat. What's a philosopher to say about it all? (That philosophy is especially pertinent to this election is well argued by Amod Lele.) It feels, to be sure, like a time when one must "go on the record"; when one must name the "insanity of non-thinking" transpiring. This over-the-top phrase is used by the philosopher Alexander Piatigorsky to describe his impression of the world of the 1920s and '30s which ushered in the great dictatorships of the 20th century; and it surely seems apropos to our own moment. But the insanity has a span far wider than what we call politics.

That the current Republican candidate (I will refer to him as DT, for "Delirium Tremens"), the buffoon no one took seriously until it was too late, is relatively accurately described by the conceptual penumbra pointed to by the word "narcissist" is, I take it, not worth arguing; it is either apparent to you, or you cannot be persuaded. (I do not believe in "narcissism" as an objective condition that exists in quite the same way as measles, or hunger, or even selfishness, but as a recognizable marker in contemporary clinical and pop psychology, it certainly serves to describe DT.) His opponent, riding a wave of entitlement towards an apparent rendezvous with making history, is fulfilling a dream and a plan unofficially scheduled long ago ("eight years of Bill, eight years of Hill..."), despite her censure by the director of the FBI for extreme carelessness with State secrets (and leaving aside other "politically motivated" accusations). You can find the candidates' flaws incommensurable if you want; insist, if you like, that HRC is a standard-issue politician with ordinary drawbacks, and DT a wannabe dictator from whom the nation and the world could not recover. Or you can think that the nation needs defending from Islamism and "business as usual," that HRC is a known and rotten quantity, and that DT is a wild card worth a gamble to "shake things up." You can even hold that we need to press things to get worse before they get better, and that one of the two is the sure way to worsen. I'm not trying to argue for or against any of these. I agree with some more than others, but each of them has its appeal, is held to by people I respect (as well as many I don't), and I'm not sure I don't move from one to the other -- yes, any of them -- depending on the day and my mood. But none of this makes the political situation philosophically interesting. The real question is what "politics" counts for.

No doubt the election will have real consequences; among them will be -- already is, I think -- an inflated sense of the consequences of elections, especially as currently run in the United States. No matter what the outcome, it can be reliably predicted that approximately half the voters will think we dodged a bullet, and half will grit their teeth and mutter about a "legal" coup d'etat taking place under our noses. And if that's as deep as it goes, then absolutely no thinking will have transpired at all.

One hears a lot about how polarization is crippling American politics; that the two major parties are so entrenched in ideological disagreements that zero-sum gridlock is the logical conclusion; and the answer on offer is the praise of bridge-building and finding middle ground. The art of compromise, you know?

I think American politics is not crippled by disagreement; it's crippled by the fact that we do disagreement so extremely poorly. We need not less disagreement, but better disagreement. This has little to do with any call for "civility," either, though civility would be a good thing. It is only tangentially related to the obvious fact that the "disagreements" of the major parties are already mere surface squabbles, that huge swathes of possible political territory are entirely left out of the "conversation." It's more closely related to the fact that, aside from whatever machinations happen in smoke-filled rooms (and I do not scoff at scenarios involving smoke-filled rooms), all current styles of political polemic involve each side in fueling the very "opposition" they ostensibly aim to combat.

This isn't about the content of any politics, whether overt (the "platform," talking points, etc.) or covert (the smoke-filled rooms). It's about the formal description of what happens when ideas compete for territory in what we might call "attention-space." When it comes to the mere memetic replication of ideas from one mind to another, disagreement need not be acrimonious, but acrimony is in fact a sort of adaptive mutation for an argument, because acrimony is loud, and loudness draws attention; it's the difference between an isolated guy with a cold, versus a guy with a cold in a stadium selling hot dogs. This is just boring old Dawkins-style memetics (though when Dawkins first formulated the notion, it was pretty innovative)*. If a mind does not think a thought, the thought disappears; but no sane mind is continually or even recurrently occupied with a single thought; so for ideas to continue to be thought, they need to move from mind to mind. (I believe that philosophy differs in certain crucial ways from this general memetic model, but I will let that pass for now.) One way to make thoughts get duplicated from mind to mind is via rational argument, appeal to reasons, evidence, and proof; but a much shorter route -- and therefore one frequently exploited -- is to hack the system via emotions. Emotional appeal clearly aids in the memetic spreading of ideas, and of these emotions, anger is especially useful in helping ideas get a foothold, because anger changes the contest -- the competition between ideas for attention-space -- into an argument, and arguments get loud.
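(For the technically minded, here is a toy simulation of the dynamic just sketched. It is entirely my own illustration -- nothing of the sort appears in Dawkins -- and every number in it is invented; the only point is the asymmetry between the two runs, in which nothing about the idea's merit changes, only its loudness.)

```python
import random

# A toy model of memetic spread (my invention; all parameters arbitrary).
# An idea persists only by moving from mind to mind; "loudness" multiplies
# its chance of transmission, as acrimony draws bystanders into an argument.
def spread(population=10_000, steps=50, merit=0.02, loudness=1.0, seed=1):
    random.seed(seed)
    carriers = 10        # a few initial minds entertaining the thought
    forget = 0.05        # minds move on; an unthought thought disappears
    for _ in range(steps):
        new = sum(random.random() < min(1.0, merit * loudness)
                  for _ in range(carriers))
        lost = sum(random.random() < forget for _ in range(carriers))
        carriers = min(population, max(0, carriers + new - lost))
    return carriers / population

print(f"argued calmly: {spread(loudness=1.0):.2%}")  # fizzles out
print(f"argued loudly: {spread(loudness=5.0):.2%}")  # the same idea thrives
```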

The louder the argument -- the more people overhear it -- the more chance it has that some of them will be drawn into the argument themselves. You might think this is unlikely: when was the last time you cared about what the neighbors were fighting over? You just wanted them to shut up, right? And that's true; you need a certain degree of susceptibility to the idea qua idea in order for it to cross over to your mind as a thought instead of merely being an emotion. But for arguments that transpire in media -- and they can be stupid arguments over the best order in which to watch the Star Wars films, or important arguments about whose Lives Matter -- the situation is different because of scale. These get lots of exposure, lots of bystanders, lots of people sucked in, which makes them get bigger and bigger, sometimes in a bewilderingly short time.

Often they burn out, too. But not always. Sometimes it happens that an argument just goes on and on in apparently endless irreconcilability, the "sides" talking more and more not to each other but to themselves about each other. Anger needs something to be angry at, and in politics this is readily to hand, for as Carl Schmitt famously insists, the essential political distinction is: friend or enemy? Each "side" not only has its own positive platform, it also has a caricature blame-scapegoat of the "other side"; and it's this caricature that sparks the angry response, which then includes a counter-caricature, and so on. Our friends are fundamentally right; whatever disagreements we have with them are family squabbles. Our enemies are fundamentally wrong. Either we or they must triumph. When they by chance say something that is clearly correct, it is because they can't help it, and they never draw the consequences from it to see how wrong they are about everything else. Eventually the argument settles into a kind of uneasy, interminable cold war with occasional hot flashes that change no one's mind but always confirm preconceptions.

When this happens, there is a real and relevant sense in which the respective sides are not (despite what they think) in competition at all; they are in a kind of systemic symbiosis. Equilibrium. By their in-tandem tug-of-war they together occupy far more of attention-space than either could by itself.

All the foregoing is essentially sociology, and does not tell us, by itself, whether this situation is merely to be observed, or deplored, or what. I don't pretend that it suffices to comprise a political philosophy, or even to give it an adequate starting-place. It certainly does not mean that the philosopher can shrug off politics as sub-philosophical. It is even arguable that the right way to understand philosophy is as Arvydas Šliogeris says:
[T]o a Greek, philosophy is inseparable from politics ... in the sense that philosophy like politics is an adequate-to-freedom existential realm of gestures of an autonomous individual.
(The Fate of Philosophy, p 20; my emphasis.)
We can leave aside the question of the anachronism of "autonomous individual" applied to Greek thinking. Šliogeris' point, I take it, would be that both politics and philosophy concern us in our responsibility; insofar as we enter into them adequately, our responsibility is engaged. There is in this endeavor absolutely no room for blame and complaint.

How then to disagree better?

One could do worse than to start with what is called in pop-psych circles "emotional intelligence." By all means, be afraid if you are afraid. Be angry if you are angry. Know your emotions, feel them, name them. And do not let them run you. This means having more of them out in the open, and acknowledging if you are not very competent yet at being unreactive when in their grip.

So, then, to name some feelings:

What I see from the Democratic candidate makes me uneasy, suspicious, and impatient, in the way I feel when confronted by someone who wants something from me. I don't give a damn about her "seeming aloof" or whatever; let's just stipulate that a woman in politics is struggling against all kinds of unspoken and unreflected-upon assumptions, and that those having to do with emotional presentation are endemic and unfair. Fine. This does not change the fact that HRC is squirrelly. I know I am part of a large and media-fueled population when I say this. I would welcome the breaking of the glass ceiling with a thrill of about-fucking-time. It is scandalous that it has taken this long for a major party to nominate a double-X chromosome human being for chief executive. It's doubly so that when they finally get around to doing it, the woman in question is so obviously a token and a tool of power. (For the record, I think her default setting is a largely unexamined basic set of liberal social values, which she more or less would attempt to act upon if the cost were not too high; but her politics and political economy are standard-issue neoliberalism, and she's clearly willing to compromise and do a great deal to get elected. Her well-known changes in position are often unacknowledged by her, and seem far more motivated by convenience than owned as actually principled changes-of-mind.)

What I see from the Republican candidate fills me with dismay. He has the posturing of Mussolini, the bad taste of Franco, and yes, the xenophobia of Hitler -- or at least, he seems to think that acting like he does is a winning proposition, which is possibly even worse. More recent analogues are not hard to find either: the intellectual pretensions of Qaddafi, or Vlad Putin's pose as suave strongman, and DT's declared admiration for the same. I feel, also, a kind of disdain, which is probably an alibi for a different kind of fear. DT seems to me the most Žižekian candidate we have ever seen -- designedly outrageous, symptomatically grossly negligent of truth, he is the reflection of the American electorate's Caliban looking back at itself. There is no question but that if DT is elected president, we are indeed looking at a game-changing era in the short term, and that short term could be very, very unpleasant. Complete the picture with stock footage of jackboots coming for you or what-have-you. I do not think it is unreasonable to fear such things. The mischief and misery that can be wrought by bad people in high places is considerable, duh, and DT looks to be a bad person. Those he has fooled or manipulated into thinking otherwise are to be pitied. Those who look at him with a kind of envious projection are just sniffing the apparent alpha-dog's ass.

But just how decisive a binary are we considering? A look backward might give us a bit of perspective. In 1964, Republican candidate Barry Goldwater, deeply unpopular even within his own party, was downright terrifying to Democrats, who saw his anti-communist belligerence as the worst kind of drum-beating. Lyndon Johnson defeated Goldwater in part by virtue of the scare-tactics exemplified by the infamous "Daisy" advertisement, which successfully leveraged the (legitimate) fear of nuclear war into a reaction against Goldwater's "extremism in the defense of liberty is no vice" stance. Johnson (who signed the Civil Rights Act and wanted to be remembered as the architect of the Great Society) went on to entrench the nation in an unwinnable war against Communism in Vietnam, partly on the false pretenses of a misleadingly spun "incident" in the Gulf of Tonkin. This should remind us that a vote against the momentarily more bellicose figure is not a vote against war.† A vote for HRC (who supported the invasions of Iraq and Afghanistan, and who opines that Edward Snowden is a traitor to his country) does not assure us of peace in our time, or better jobs and higher wages, or a better life for the wretched of the earth. Remember that when we talk about "assuring Obama's legacy," we are also talking about kill lists, drone strikes, and NSA surveillance of American citizens.

I wrote and asked my friend what it was that had made him see parallels between the United States of 2014 and the Germany of eighty years previous. His explanation drove home just how parochial American political reflection usually is:
I saw what Fox News was — and the other outlets were not much different — and certainly I had seen at least since Iraq 1.0 how the media’s main purpose is to cover up what was really going on. The leftist press however showed what was really going on, and living in poverty makes you very sensitive to downward social trends. My connection with Africa and knowing about Darfur and World War 3 in the Congo (10 million dead and counting; not a peep in our press!) impressed upon me the incomprehensible enormity of the crimes we were committing. At some point I read Chalmers Johnson’s Blowback trilogy. And when you looked at the increasing restrictions on freedom of speech, at government secrecy, the militarizing of police, the "war on drugs”, surveillance, surveillance, surveillance.... How could anyone not realize where all this was going?
I don't cite this letter because I assume everyone will agree with it (though I do urge people to read Chalmers Johnson, whose books are still far from passé even though the trilogy was published entirely during the Bush administration), but because it underscores how long the roots of our current dilemma are. The ruin of our politics is deep, and if we want to live politically in a responsible and philosophical way, we must think it through to the beginning -- which means (I am sorry for the newage sound of it, but I cannot find a more succinct formulation) to ourselves. The origins of the bad corner we find ourselves in now, with a choice between a candidate who probably thinks she means well but also that the rules do not apply to her, and one who redefines the phrase "loose cannon," are not to be sought in the last year or the last eight, or even in the debacle of the Bush II years. They go far back, and they implicate us.

(This, by the way, is why in the event of actual jackboots kicking in doors, or actual round-ups of immigrants, or etc., I will not be splitting the country. Anyone who makes a different choice has my respect and good wishes. I'll try to leave a light on for you.)

A system with deep roots also has long branches. No matter who is elected, the future is not pretty. It is possible that one candidate or another can make the situation much worse; but the notion that a magic president can make it all better is ludicrous, in part because there are those for whom it is working very, very well already. The American day has passed. We are indeed rich, and privileged, but the center of political, intellectual, and indeed even scientific life is elsewhere. The USA is a declining Empire, and though it may take some while before this really is driven home, it is a matter of When, not of If. This is not, however, just a bit of "pessimism" about "our country's future," and those who react with the "this-is-part-of-the-problem" knee-jerk are themselves part of the problem.

It is possible that there may be some less-uncomfortable semi-soft landing option available to us as we fly into the mouth of the Long Emergency, but there are many, many potential crash-landing outcomes that are bad indeed. I am all for optimism, and I do not sneer even at can-do technological ingenuity, let alone at the attempts of people of good will to act freely, kindly, wisely. (Also, I am a teacher, which commits me professionally to a certain unjustified optimism I do not always think is wholly warranted, but which both allows me to do my job and is subjectively rekindled by my job. Working with very young people is good for the soul.) I do anticipate that any workable course will involve the giving up of a great deal of American prestige. Current talk about reparations to African or Native Americans barely touches the surface of what might be called for. The lives to which most Americans, including the majority of the 99%, are accustomed are premised upon the ruination of planet and people. The standard "left" account is that we can consume a bit less and a bit differently in order to keep consuming. The truth is that our consumption accelerates. I believe that human beings are free and capable of acting rationally, but I am not sanguine about the likelihood of people volunteering to have less, particularly if this implies a sort of loss of face.

I emphasize, however, that this cannot be -- for the philosopher -- an occasion for mere despair or even impotent fury. Politics is by nature a short-term game; diplomacy, law-making, overseeing budgets -- these are the matters of today and tomorrow. The founders of constitutions -- the Enlightenment thinkers who imagined cosmopolitan civilization -- were actually doing something fairly new; they were trying to imagine a workable social and governmental organization that would be able to persist beyond the short term, without depending on the assumption that the conditions of the short term would themselves persist. It was a sort of hybrid project of philosophy and politics (and much of the critique of modernity found, for instance, in Leo Strauss, but also in Schmitt, is based on the question of whether or not this hybrid is viable). Philosophy itself looks to a far broader horizon still. By that measure, all nations are as grass, and the proudest empire may be a chapter in a book for a while. But this does not mean philosophy does not motivate politics.

In the short run, I am deeply dubious about the impact of elections, even this one; but there are certainly ways in which elections could be organized that would give them a far more realistic and meaningful impact upon longer-term prospects. If there is any single political cause I could urge upon people, it would be the boring-sounding one of election reform; not, however, as a crusade against "money in politics," as though that were possible (though clearly it could be far better managed than it is). I mean, rather, a fundamental restructuring of the ways votes are counted. There are, as far as I know, no perfect and irreproachable methods for this; there are however many much, much better ways of doing it than are done in the U.S. The chokehold of the two major parties; the stifling of and condescension towards Libertarians, Greens, Independents, and others; the absolute panic that sets in when something like the Tea Party happens in the midst of a major party -- all of this could be radically eased by adopting a sane and just manner of counting votes in a participatory democracy, one which allowed people to rank the candidates in order of preference. It would put a brake on negative campaigning, assure that no one would be elected without genuine majority backing, and eliminate the need for defensive voting and the wails about choosing the lesser of two evils.‡
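(To make the "order of preference" idea concrete, here is a minimal sketch of one such method -- instant-runoff counting over ranked ballots. The ballots and party names are invented for illustration, and real systems add tie-breaking and exhausted-ballot rules that this toy omits.)

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot ranks candidates best-first; the weakest candidate is
    eliminated round by round until someone holds a genuine majority."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in ballot if c in remaining)
                        for ballot in ballots
                        if any(c in remaining for c in ballot))
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        remaining.remove(min(tally, key=tally.get))

ballots = [("Green", "Dem"), ("Green", "Dem"), ("Dem",), ("Dem",),
           ("Rep",), ("Rep",), ("Rep",)]
print(instant_runoff(ballots))
# Plurality counting would hand "Rep" the win, 3-2-2; here the "Green"
# ballots transfer, and "Dem" wins with a genuine majority, 4-3.
```

The property to notice is the majority test: no one is declared the winner on a mere plurality, which is what takes the sting out of voting one's conscience.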

And, in the likely event that darker times are upon us before reform can transpire?

Jean Cavaillès -- one of Alain Badiou's heroes and favorite examples of the philosophe engagé -- wrote on set theory, logic, and phenomenology, edited Cantor's letters with Emmy Noether, was strongly engaged by Barth's theology, and was shot in 1944 by the Germans for his work with the Resistance. (He had already escaped once; he wrote his last work, On Logic and the Theory of Science, as a prisoner.) Badiou likes to cite Georges Canguilhem's description of Cavaillès:
A philosopher-mathematician stuffed with explosives, a man as lucid as he was courageous, a man both resolute and without optimism. If that isn't a hero, what is?
During a meeting with Raymond Aron in London after his first escape, Cavaillès told him:
I am a Spinozist; I believe we must submit to necessity everywhere. The sequences of the mathematicians are necessary; even the stages of mathematical science are necessary. This struggle that we carry out is necessary as well.
Knox Peden, from whose book on Spinozism in twentieth-century French thought I quote this anecdote, comments (not wholly approvingly) that in telling this story Aron manages to politicize and depoliticize Cavaillès' thinking in the same stroke. While connecting Cavaillès' heroism to his philosophy, Aron also holds that this heroism was unconnected to any specific politics -- "be they communist, socialist, or democratic" -- because Cavaillès was, he says, simply acting in accordance with necessity as he saw it. Says Peden:
This is a politics that is logical and pure; in a word, it is above politics.
I agree with Peden in his characterization, but I disagree with his evaluation. I think that Aron is right, in a certain sense, to praise Cavaillès for a dispassionate politics. Politics, philosophically engaged, is bound to look like "a politics above politics." It is. The archetype of this was Socrates' trial. Or, to take a fictional instance:
Why does any martyr cooperate with his judases? ... We see a game beyond the endgame ... As Seneca warned Nero: No matter how many of us you kill, you will never kill your successor. (David Mitchell, Cloud Atlas p 349) §
Vote for whom you want -- don't vote for either big-party candidate if you can't hold your nose, but don't forget down-ticket issues, and vote your conscience. Work for election reform to lessen the need for nose-holding everywhere; work for whatever other immediate and near-term ends you feel called to. This will suffice for politics; that is "the game." But in the end, the odds of the game are not good. And in light of that, philosophy must look to the game beyond the endgame, recalling what philosophy is. The only absolute political responsibility of philosophy is to assure that philosophy remains possible: not to survive, but to live the life worth living, the examined life. And this can only be done if philosophers continue to be philosophers. Politics is especially dangerous for this, not just because of the siren song of power (as supposedly exemplified by Plato's ill-fated "Syracusan adventure," or Heidegger's famous "blunder," as he called it, with Nazism), but because of the much more banal emotion I mentioned before: anger.

Towards the end of his life, after having lived in London since the 1970's, Piatigorsky returned for a visit to Moscow. A film crew followed him around recording his impressions for a documentary. As was inevitable, he was met by a city that had drastically changed since he had left. Bemused by the experience, he said:
I don’t know modern Moscow. Many things that I’ve seen make me sad. Just make me sad – in no case indignant. Generally speaking, you know, a philosopher must avoid being indignant. He can be indignant only with himself.

_______

* For an amusing introduction to the memetics of argument, which covers many of the same points but with cute cartoons, I commend to you this video by the always-entertaining and often correct CGP Grey.

† I was reminded of some of this history by the recent dramatic production of Daisy at ACT Theatre in Seattle.

‡ The aforementioned CGP Grey has a series of educational videos on this matter, which you really should watch, beginning with this one, and then this.

§ In this novel, the memetic mechanisms referred to above are also part of this meta-game:
Media has flooded Nea So Copros with my Catechisms. Every schoolchild in corpocracy knows my twelve “blasphemies” now.... My ideas have been reproduced a billionfold.
Nietzsche would say the same about Socrates' motivations.

Sunday, July 17, 2016

The limits of "Just give me the arguments."


In an 1843 letter to Feuerbach, Marx wrote:
How cunningly Herr von Schelling enticed the French, first of all the weak, eclectic [Victor] Cousin, then even the gifted Leroux. For Pierre Leroux and his like still regard Schelling as the man who replaced transcendental idealism by rational realism, abstract thought by thought with flesh and blood, specialised philosophy by world philosophy! To the French romantics and mystics he cries: "I, the union of philosophy and theology," to the French materialists: "I, the union of flesh and idea," to the French skeptics: "I, the destroyer of dogmatism," in a word, "I ... Schelling!"

Schelling has not only been able to unite philosophy and theology, but philosophy and diplomacy too. He has turned philosophy into a general diplomatic science, into a diplomacy for all occasions. Thus an attack on Schelling is indirectly an attack on our entire policy, and especially on Prussian policy. Schelling's philosophy is Prussian policy sub specie philosophiae.
When you first read this, you'd swear it was something from Nietzsche's Nachlass. Then suddenly you realize that all that shrewd poison and psychologizing was more or less just in the air in the latter half of the 19th century. This was one of the accepted and expected ways to think, and if you were an intellectual in that milieu, you picked it up and got good at it. (Of course, not everyone got as good at it as Nietzsche.)

This is one of the nuances (and no less crucial for being a nuance) that is lost when you read philosophers of the past out of context, as documents in a line of "arguments" about such-and-such a subject.

This could easily be mis-read as an argument for incorrigible historicism, or, at the least, as claiming that one requires a host of footnotes or some kind of sensitive antennae in order to "really read" thinkers of the past. A certain kind of impatient student is rightly suspicious of such implications. "Why should I care about the 'historical context'?" they want to know. "Just give me the arguments." There is a certain prima facie plausibility to this objection, for surely, as Aquinas argued (and I have cited it before), "Philosophy does not consist in asking what men have said, but in asking after the truth of the matter." This objection looks like it is saying, "cut to the chase," or, more generously: it doesn't matter when it was said, or by whom, or within what set of cultural assumptions; what matters is, is it true? Or, slightly more broadly: is it a good argument?

But when you read a thinker -- and it needn't be someone shelved in the philosophy section of the library -- you encounter not a set of arguments alone, but a person, a mind deploying arguments of diverse kinds, yes, but also other things than arguments: description, rhythm, trope; and always within a language and a context, an already-underway conversation.

To be aware of this is not (or ought not to be) to guard access to competence behind doorkeepers of learning. It is not that "you can't understand Kant without understanding the world he lived in," such that competence in the moves of the first Critique would require an armature of previous certifications in Wolff and Leibniz, in Königsbergian and Prussian politics, in Lutheranism and German typography. It's not even -- though this is truer -- that a grasp, or an appreciation, or an awareness, of any of these adds a dimension to one's feeling for Kant that may or may not be relevant to seeing what he's doing with the synthetic a priori. Talk like that tends to be intimidating, implying that to win the right to an opinion requires fighting one's way through a thicket of prerequisites. But dispel the shadow of the law from all of this, and you find instead the visage of the person. A thinker is a person to know, not a table of positions; they are a style, a stance, and indeed a career. No one knows a person out of context.

Collingwood insists that historical understanding consists in the re-enactment of thought. That re-enactment can't happen unless you are oriented, to some degree, in the same landscape of problems and accepted moves that the thinker inhabited.

The context does not explain, let alone explain away; it does not mean you shrug off Nietzsche's psychologistic readings of whoever, just because you've seen Marx do the trick before. But when you are confronted by the (let us admit) rather overwhelming personalities of "the great philosophers," it can help to see them in their element. Not to "take them down a notch," but to let us hear them better.

Wednesday, June 29, 2016

My bet, and Nietzsche's: What "Europe wants," Europe gets... Eventually


I don't usually post about politics here, and I think only once before about the E.U., but the recent vote in the UK and this as-always intelligent set of reflections (and follow-up) from The Poseidonian reminded me of a quote from Beyond Good and Evil that has been knocking around in my head for the past week.

Poseidonian reminds us that there is a down-side to lots of little countries, and that one of those is war:
What, after all, characterized Europe for 1500 years before the EU? If you’re a Nietzschean a return to endless war would have its upside I suppose.
Yes, but if you are a Nietzschean, you believe (insofar as Nietzscheans "believe" things) that the EU is the future, whatever the growing pains of the present:
Thanks to the morbid estrangement which the lunacy of nationality has produced and continues to produce between the peoples of Europe, thanks likewise to the shortsighted and hasty-handed politicians who are with its aid on top today and have not the slightest notion to what extent the politics of disintegration they pursue must necessarily be only an interlude — thanks to all this, and to much else that is altogether unmentionable today, the most unambiguous signs are now being overlooked, or arbitrarily and lyingly misinterpreted, which declare that Europe wants to become one. In all the more profound and comprehensive men of this century the general tendency of the mysterious workings of their souls has really been to prepare the way to this new synthesis and to anticipate experimentally the European of the future: only in their foregrounds, or in hours of weakness, in old age perhaps, were they among the "men of the fatherland" — they were only taking a rest from themselves when they became "patriots." (BGE 256)
The Poseidonian goes on to (albeit unintentionally) describe, well, me:
If you are a real Marxist, then you will presumably agree with me that the EU is ultimately both an effect of and an instrument of capitalism and American power, and as a result anything that weakens it would be a good thing. I respect you, noble adversary! Not only are you honest, but you see certain things more clearly than your kumbaya-singing brothers and sisters who think that the EU is an effect of, and an instrument of niceness. Conversely, if you are a Christian pacifist, and you think that pragmatic compromise with power to enhance freedom is the Devil's way, and that the only right thing to do in the face of power is to surrender completely… and hope that at the end of history God will set it all to rights, I also respect you.
Yes, that's me both times, at least on good days, so I might (on the good days) claim a double portion of respect. But alas, I don't really hold to the pragmatic upshot. I am a localist, and I have no problem with the notion of "Italy for the Italians" or whatever (though I hope, too, for a hospitable localism); but I think Nietzsche was right about the direction of world history in this case. Yes, I do think the EU is a creature of Capital -- transparently so, in fact -- but my (Christian) resistance is far more passive; and in any case, I think the general case "for" Britain's exit from the E.U. just smells like what Nietzsche describes: arbitrary and lying misinterpretation. There are doubtless many for whom this was not the case, who had principled motives for voting to leave. Some of those motives are likely the ones the Poseidonian attributes to the "real" Marxist or the pacifist Christian.

For the record, I suspect that there are many, many more growing pains to be gone through before we get a real US of E. But unless those pains just kill us (which I guess they could), that's what we will get in the long run.

Tuesday, May 24, 2016

Wanting truth: both ways


An article by Galen Strawson in The NY Times' philosophy column maintained that
the hard problem is not what consciousness is, it’s what matter is — what the physical is.
I read this recently in the context of a philosophy discussion group about the philosophy of mind (we are reading John Searle at the moment). Strawson's remark reminded me that in an earlier conversation, one friend had opined, in the midst of a heated back-and-forth about just where consciousness "arises" in the complex web from neuro to social, "I don't see consciousness as a problem." This remark occasioned a total break in conversation for a space of some seconds, until something like an uncomfortable "ohhh-kay, then" broke the silence.

The anecdote nicely illustrates, I think, something about starting points -- not our axioms, but our situated values, with which we embark into enquiry.

No one in our discussion is an out-and-out eliminativist, alas; if they were, I might be able to understand better. As it is, my attempts to understand eliminativism founder, but here's what I've got so far.

I think eliminativism arises from a kind of fixation on the deflationary experience of explanation, a deflationism run riot. There is often something about a really convincing explanation that makes us mildly disappointed; an "Oh, so that was all." (Cue the song "Is that all there is?") One of the tricks (the good tricks) of the scientific urge is to turn this deflation into an asset, to get us excited about that experience, the rush of delimiting the problem. At least in some cases, this deflationary stance in turn breeds reactionaries whose rallying cry is "But wait, there must be Something More..." It is easy to make a caricature of these latter as chasers after the MacGuffin. ("No such thing!") But the main point isn't about whether there is, or is not, Something More. It's about the animus that wants to pursue it without ever catching it, like King Pellinor after the Questing Beast; or conversely, the animus that wants to track down every last Something More until the very idea of Something More is eradicated -- because that's what it's after, really. One stance gets off on a high which is the glimpse of something ever-slipping beyond the horizon; the other gets off on the deflationary feeling of "That's all it ever was," which it parlays into a rush of "Ha, Got you!" (like when you finally thump the glass down trapping the running spider). (If you just step on them, you will have to substitute your own metaphor.)

Sometimes one wants very badly to cut through the issue of the satisfaction a style of thought is after, and instead to ask and to know: which one of these (deflationists or something-more-ists) is "right"? This is obviously a clumsy formulation in this case -- there are far too many complicating questions -- but we still have an intuition about the bifurcation true/false, that only one of these terms can pertain. I, however, do not think the issue ("which is right?") can be separated from the question of the drive that motivates them. I know that in this I risk a sort of psychologistic reduction of philosophy, or even Bulverism -- C.S. Lewis' joke-name (after its fictitious founder) for a dismissive attitude to any argument, dispensing with the reasons for the argument by "explaining" them instead, in radically ad hominem terms. Although I am (in modern, shorthand terms) a Kantian as regards faith and reason, I emphatically reject any psychologism that reduces truth to the function of "our sort of mind" (whether that "sort" is simian or bourgeois or postmodern or whatever); but I still deny that we have any unmediated access to a pure algebra of rationales hanging suspended in the Space of (Pure) Reasons.

Fichte famously said that philosophy was a function of temperament:
what philosophy a man chooses depends entirely upon what kind of man he is; for a philosophical system is not a piece of dead household furniture, which you may use or not use, but is animated by the soul of the man who has it. (Science of Knowledge, Introduction, sec. V)
Philosophers often recoil from this notion -- as if it were offered as a kind of excuse, or a blanket explanation like phrenology. Taken as such it is clearly unacceptable, since it would derail the very idea of dialogue. But there is something to this notion of philosophical temperament, nonetheless. It's very hard to get a handle on, especially in one's own case, and most thinkers prefer to ignore the question entirely, or at best to marginalize it into the human-interest part of philosophy.

One who doesn't ignore it is Thomas Nagel. Here is a lovely and forthright passage from The Last Word:
In speaking of the fear of religion, I don’t mean to refer to the entirely reasonable hostility toward certain established religions and religious institutions, in virtue of their objectionable moral doctrines, social policies, and political influence. Nor am I referring to the association of many religious beliefs with superstition and the acceptance of evident empirical falsehoods. I am talking about something much deeper–namely, the fear of religion itself. I speak from experience, being strongly subject to this fear myself: I want atheism to be true and am made uneasy by the fact that some of the most intelligent and well-informed people I know are religious believers. It isn’t just that I don’t believe in God and, naturally, hope that I’m right in my belief. It’s that I hope there is no God! I don’t want there to be a God; I don’t want the universe to be like that. (p 130)
It is extremely difficult to entwine these approaches, to want truth and to want truth; to desire it both qua truth, and qua desirable. Philosophy is not accomplished by impossibly cutting oneself off from context, including the context within oneself. (Though it feels like that sometimes, and includes any number of impossible efforts.) No matter what sort of explanation appeals to you, ask yourself: why does it appeal? What is this "appeal"? Think enough about this and you will find yourself staring at the question of the nature of the Good.

Friday, April 29, 2016

Publishing. And not.


When Hilary Putnam died recently, a common theme in the many tributes offered was his well-known willingness to change his mind. I can no longer track down the remark and so cannot quote it exactly, but I remember some place where he spoke frankly of these about-faces, strongly commending the capacity to say things in the form: "I once thought.... I now think....".

This came back to me recently while reading R.G. Collingwood's Autobiography. Collingwood is reminiscing about John Cook Wilson, a professor of logic at Oxford:
"I rewrite, on average, one third of my logic lectures every year," said he. "That means I'm constantly changing my mind about every point in the subject. If I published, every book I wrote would betray a change of mind since the last."
For Putnam, it simply meant he was willing to conduct his shifts of position in public. For Cook Wilson, it was (according to Collingwood) a reason not to publish at all. (In fact, Cook Wilson did publish -- mainly articles, many of which were collected in the posthumous Statement and Inference in 1926.) There are others who fit these profiles: Socrates in the agora, (supposedly) constantly changing his position, is a model of continual public revision; Wittgenstein, pruning and re-arranging the Investigations until there was no realistic chance of making them public in his lifetime, is at the other extreme. But most thinkers fall somewhere between.

Collingwood remarks that
I already knew that there are two reasons why people refrain from writing books; either they are conscious that they have nothing to say, or they are conscious that they are unable to say it.
For almost the first two decades after I came of age intellectually, I was haunted by the first reason. But I begin to wonder if it won't be the second that really gets me.

Tuesday, March 22, 2016

A Moving Picture of Eternity


The liturgical year is a complex matrix of many interlocking cycles. The weekly cycle from Sunday to Sunday turns within two slower processions of observances that move through the year. One cycle is a set of fixed celebrations with set calendar dates (Christmas, for instance, always on December 25; Epiphany, always January 6; various Saints' days, each assigned a calendar date usually associated with their death, or sometimes the transfer of relics or some other event). The other is the so-called "moveable feasts," which occur on different dates in different years. Easter is the most obvious of these, and many moveable feasts have their center of gravity at Easter and move forward or backward through the calendar based on when Easter falls. Thus, Pentecost comes fifty days after Easter, and Clean Monday comes 48 days before it. Obviously, moveable observances (not always feasts, as will be apparent in a moment) will pass nearer or farther from various fixed observances in various years.

2016 brings a rare occurrence this coming Friday -- the coinciding of two very solemn observances, one fixed, one moveable: the Feast of the Annunciation, and Good Friday. The Annunciation -- the day the archangel Gabriel announced to the Blessed Virgin Mary that she would conceive the messiah, and she responded "Be it unto me according to your word" -- is traditionally fixed on March the 25th. Good Friday is of course the day of the crucifixion; and not a feast but a fast. The conjunction of these two observances is very infrequent. It happened in 2005, but before that, not since 1932, before either of my parents was born; and it won't happen again this century.
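(The arithmetic behind such coincidences is mechanical enough to sketch. What follows is a small Python illustration of my own, using the standard "anonymous Gregorian" computus for Easter; the moveable observances then fall out as fixed offsets, and the rare years can simply be enumerated.)

```python
from datetime import date, timedelta

def easter(year):
    """Gregorian Easter by the anonymous (Meeus/Jones/Butcher) computus."""
    a, b, c = year % 19, year // 100, year % 100
    d, e = divmod(b, 4)
    g = (b - (b + 8) // 25 + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def good_friday(year):
    return easter(year) - timedelta(days=2)

def pentecost(year):
    # "fifty days after Easter," counted inclusively, as feast-reckoning is
    return easter(year) + timedelta(days=49)

def clean_monday(year):
    return easter(year) - timedelta(days=48)

# Years since 1930 in which Good Friday lands on the fixed Annunciation:
print([y for y in range(1930, 2100) if good_friday(y) == date(y, 3, 25)])
# -> [1932, 2005, 2016], and nothing after 2016 this century.
```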

Whenever the Annunciation falls during Holy Week, the practice in the Roman rite of late has been to transfer it to the first unimpeded day after the Easter Octave. (The same happens with the feast of St Joseph, March 19.) The Anglican and Lutheran churches follow suit these days, but of course it was not always so, and it was not so in 1608, when the future dean of St Paul's cathedral wrote his poem
Upon the Annunciation and Passion Falling upon One Day
1608

Tamely, frail body, abstain today; today
My soul eats twice, Christ hither and away.
She sees Him man, so like God made in this,
That of them both a circle emblem is,
Whose first and last concur; this doubtful day
Of feast or fast, Christ came and went away;
She sees Him nothing twice at once, who’s all;
She sees a Cedar plant itself and fall,
Her Maker put to making, and the head
Of life at once not yet alive yet dead;
She sees at once the virgin mother stay
Reclused at home, public at Golgotha;
Sad and rejoiced she’s seen at once, and seen
At almost fifty and at scarce fifteen;
At once a Son is promised her, and gone;
Gabriel gives Christ to her, He her to John;
Not fully a mother, she’s in orbity,
At once receiver and the legacy;
All this, and all between, this day hath shown,
The abridgement of Christ’s story, which makes one
(As in plain maps, the furthest west is east)
Of the Angels’ Ave and Consummatum est.
How well the Church, God’s court of faculties,
Deals in some times and seldom joining these!
As by the self-fixed Pole we never do
Direct our course, but the next star thereto,
Which shows where the other is and which we say
(Because it strays not far) doth never stray,
So God by His Church, nearest to Him, we know
And stand firm, if we by her motion go;
His Spirit, as His fiery pillar doth
Lead, and His Church, as cloud, to one end both.
This Church, by letting these days join, hath shown
Death and conception in mankind is one:
Or ‘twas in Him the same humility
That He would be a man and leave to be:
Or as creation He had made, as God,
With the last judgment but one period,
His imitating Spouse would join in one
Manhood’s extremes: He shall come, He is gone:
Or as though the least of His pains, deeds, or words,
Would busy a life, she all this day affords;
This treasure then, in gross, my soul uplay,
And in my life retail it every day.


– John Donne
(spelling modernized)
One of the reasons for transferring the Annunciation, of course, is that it is a feast, and the prototype of all feasts is the Eucharist, which is not celebrated on Good Friday (but rather distributed from the sacrament reserved from Maundy Thursday, the night before), and is not eaten at all on Holy Saturday. But in the Byzantine rite, the Annunciation is not transferred, and if it should coincide with either Good Friday or Holy Saturday, the Eucharist is celebrated nonetheless, and in the various hourly prayers things also get rather complicated. Such practices may strike one as an instance of the liturgical impulse to complexity and the piling of rules upon rules -- the sort of thing Agamben refers to in his critique, in The Highest Poverty, of the drive to make all of life into a liturgy. Or it may look like just another relic of what happened before we were clever enough to use the metric system. Wouldn't it all be so much simpler to just use one calendar instead of this bizarre and inconsistent mesh of solar and lunar approximations inherited from over a spread of 4,000 years? But it is neither a symptom of some delight in minutiae, nor a hangover from uneducated sun worship. There was a long-standing tradition that the crucifixion had transpired on this date. Tertullian, or whoever wrote the Adversus Judaeos attributed to him, writes:
And the suffering of this "extermination" was perfected within the times of the seventy hebdomads, under Tiberius Caesar, in the consulate of Rubellius Geminus and Fufius Geminus, in the month of March, at the times of the Passover, on the eighth day before the calends [i.e., the 1st] of April, on the first day of unleavened bread, on which they slew the lamb at even, just as had been enjoined by Moses.
Probably, though, the simplest demonstration that the crucifixion is traditionally dated on the 25th of March is that this is the date of the commemoration of St. Dismas, the penitent thief, who was crucified alongside Jesus and asked Him, "Remember me when you come into your kingdom." As for the Annunciation, of course it is assigned to March 25. The date follows with perfect logic from the date of Christmas. Just do the math.
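As an aside: if you want a concrete sense of just how tangled that "mesh of solar and lunar approximations" really is -- and therefore how rarely the movable Good Friday lands on the fixed Annunciation -- the whole Gregorian computus fits in a few lines of code. What follows is my own illustrative sketch, no part of the tradition under discussion; it uses the well-known Meeus/Jones/Butcher algorithm for Gregorian Easter:

```python
from datetime import date, timedelta

def gregorian_easter(year: int) -> date:
    """Gregorian Easter by the Meeus/Jones/Butcher algorithm."""
    a = year % 19                         # year's place in the 19-year lunar cycle
    b, c = divmod(year, 100)              # century / year within century
    d, e = divmod(b, 4)                   # Gregorian leap-century terms
    f = (b + 8) // 25
    g = (b - f + 1) // 3                  # lunar (epact) drift correction
    h = (19 * a + b - d - g + 15) % 30    # locates the ecclesiastical full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # step forward to the following Sunday
    m = (a + 11 * h + 22 * l) // 451      # rare correction keeping Easter in bounds
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

# Good Friday is two days before Easter; when does it fall on March 25?
# (Gregorian reckoning only -- Donne's 1608 England still kept the Julian calendar.)
for year in range(1600, 2300):
    good_friday = gregorian_easter(year) - timedelta(days=2)
    if (good_friday.month, good_friday.day) == (3, 25):
        print(year)   # 2016 appears on this list; the gaps can run over a century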

By other calculations, March 25 was also traditionally determined to be the date of Adam’s creation and Fall. I did a little digging and found a bit of Medieval Latin poetry here, which begins:
Salva festa dies, quae vulnera nostra coerces,
Angelus est missus, est passus et in cruce Christus,
Est Adam factus, et eodem tempore lapsus.
"Sacred festival day that heals our wounds, on which the Angel is sent, Christ suffers and is crucified, Adam is made and on the same falls," is my stammering (possibly wrong) rendering. The poem goes on to mention a number of other important events which all just happened to occur on this date: the slaying of Abel, the blessing of Melchizedek (on Abram), the sacrifice of Isaac, the beheading of John the Baptist, the rescue of Peter and the slaying of James under Herod (as per Acts 12). I'm unsure whether the author here is drawing on earlier tradition in every case or just piling up events on his own initiative.

Incidentally, and I'll stake three pints that it was not by accident, Tolkien also made the New Year begin on March 25, after the War of the Ring:
'Noon?' said Sam, trying to calculate. 'Noon of what day?'

'The fourteenth of the New Year,' said Gandalf; 'or if you like, the eighth day of April in the Shire reckoning. But in Gondor the New Year will always now begin upon the twenty-fifth of March when Sauron fell, and when you were brought out of the fire to the King.'
(It was also the first day of the year in England until the calendar reform of 1750 took effect in 1752.)

OK, all very interesting, you may say (at least to be polite), but... well, so what?

A liturgical calendar is not just a set of rules telling you what to do. It's a body of worship you move through in time; a set of corporate spiritual exercises, undertaken together. It is stable, but fluid. It has a pattern, but it is always shifting. If you enter into it intentionally, it can be a gentle and ongoing curriculum of ever-deepening prayer. Scriptures, and collects, and commemorations are paired with each other one year and then move away from each other, in a gesture like the circling of heavenly bodies or the gradual dance of one of Calder's mobiles. There is no single, memorize-this lesson to it; the constellation is always in motion, and this is a good thing. It's a way of reading the Bible in dialogue with itself, with the ongoing tradition of which it is a part, and with the whole community of the faithful, out of which moments of realization can emerge, sometimes slowly dawning on you, sometimes flashing out in startling clarity. The coincidence of the Annunciation and the Passion is obviously a very resonant and potent alignment, fraught with theological and symbolic associations. You may appreciate some of those when you consider them in the abstract; but it is a different matter to pray them, as part of the discipline you and your whole community have undertaken. Donne saw some of this, and sensed more, and wrote what he could (which is a good deal more than I could have). The poem is itself worth meditating upon a good deal, and I'll not try to unpack it here. But I want to point out the way the last line opens upon a matter of real import, moving beyond its own occasion. Donne is writing about a single day in the year, and indeed an exceptional day, a conjunction wonderful and rare. The first line underscores this marvel by repeating (and ending with) the word "today." And yet, he concludes:
This treasure then, in gross, my soul uplay,
And in my life retail it every day.
Every day. The correspondences of the liturgical calendar are not magical. There is no special power in the day itself, the 25th of March, by virtue of the square of five or the proximity to the Spring Equinox or anything else. The calendar -- like every set of spiritual practices -- does reveal something, and without entering into it deeply, you can and will miss much; but what it shows is what is always there.

Saturday, March 12, 2016

Philosophy Departments, Nihilism, Psychic Research, and the Continental / Analytic Divide. And a few other things you didn't think were related. Part ii


Last post I ended with a question: could the "new psychology" -- the empirical psychology being forged into a science in the late 19th century and whose departmentalization in universities triggered the responding departmentalization of philosophy -- have attained its final, deadly form in the rise of the neurosciences? When RS Bakker argues that logic is neurons and phenomenology is neurons and that any attempt at higher-level neuronic reflection on these patterns is an illusion, is this anything but a final, scoffing fulfillment of the anxieties Reed claims motivated the founding of philosophy departments in the first place?

This would be the triumph of just what, in the previous post, we saw Socrates lampoon in the Phaedo. "The Good" will have been exhaustively described as "sinews and bones," or neurons and glia, "arranged 'good'-wise." If we go all the way with Bakker, this simply means that philosophy is, and has always been -- not just since 1901 or 1879 -- an enormous act of denialism. But we need not go all this way. In my last post but one, I claimed strongly that it is untrue "that our circumstance has somehow encountered a game-changer in science:"
The void was not discovered by science... I absolutely deny that our dilemma is in any decisive sense completely new and without precedent. Nihilism has always been "at the door," to those who had the skin to sense the chill. If there was ever a real response to it, there remains one now.
Then I remembered that Brassier had anticipated this argument:
‘Nihilism’ in its broadest sense, understood as the predicament in which human life and existence more generally are condemned as ‘meaningless’ (i.e. ‘purposeless’), certainly predates the development of modern science (think of Ecclesiastes). But the emergence of modern science lends it a cognitive import it did not previously enjoy, because where pre-modern nihilism was a consequence of a failure of understanding – “We cannot understand God, therefore there is no meaning available to creatures of limited understanding such as we” – modern nihilism follows from its unprecedented success – “We understand nature better than we did, but this understanding no longer requires the postulate of an underlying meaning”. What has happened in this shift is that intelligibility has become detached from meaning: with modern science, conceptual rationality weans itself from the narrative structures that continue to prevail in theology and theologically inflected metaphysics. This marks a decisive step forward in the slow process through which human rationality has gradually abandoned mythology, which is basically the interpretation of reality in narrative terms. The world has no author and there is no story enciphered in the structure of reality. No narrative is unfolding in nature, certainly not the traditional monotheistic narrative in which the human drama of sin and redemption occupied centre stage, and humanity was a mirror for God.
Quite a while ago, I remarked (following up on a comment by my friend dy0genes) on Nihilism as a sort of ghost story, and I have since had many occasions to point out the magnetism between nihilism and the literary genre of horror. And what makes a ghost story more compelling than the preface: this story is true?

Brassier holds that science gives us reasons for nihilism: rather than finding its motive in human insufficiency ("Even if one were to by chance hit upon the truth," as one of the pre-Socratics glumly put it, "one would not know"), contemporary, science-powered nihilism has a robust case for itself grounded in human capacity.

I have no knock-down response to this. But I do disagree with the way Brassier reads the motive for nihilism past. His claim that this was ignorance about God is especially striking. This ignorance has never been in question and remains axiomatic, for the infinite God and the finite (or even differently infinite) mortal human intellect are by definition incommensurable; but the strong voices in the tradition were always those who, while thinking this total incommensurability through, nevertheless found this an insufficient reason for despair, since God could and did reach out from the other side of the unbridgeable ontological gulf. The prophet Isaiah, St Gregory of Nyssa, Ibn Arabi, Simone Weil -- all are insistent that we do not reach God via some rope-ladder of syllogisms, let alone by rolling balls and inclined planes; but they know from experience that God breaks in upon the human condition nonetheless. Now if this assertion about "experience" sends your eyes rolling, there is nothing I can do about it. No argument is stronger than rolled eyes; but that is because rolling your eyes prescinds from argument.

Comes the rejoinder: "Bah!! It's your invocation of some unverifiable 'experience' that prescinds from argument!" But everything hinges on what counts as "verifiable" here. I am not going to rehash the whole "everyone-takes-some-things-on-faith" line, not because I don't think it's true or relevant (it is both) but because it usually doesn't work as a front-line tactic. I'm going to argue, rather, that what counts as evidence is heavily skewed in the official courts.

Jeffrey Kripal argues that you don't need God to blast open the doors of perception; simply taking seriously a precognitive dream would be sufficient to demonstrate that something besides the exchange of sodium and potassium ions is going on. The article in which Kripal made this suggestion (in the Chronicle of Higher Education) drew an intensely dismissive eye-rolling screed by Jerry Coyne in response. "What about all the false predictions?" Coyne asked, as if no one had ever thought of that. But the point is not that we never err; of course we make mistakes all the time, far more mistakes than "direct hits." The point is that when some curious event like a precognitive dream or a powerful synchronicity comes upon you, you experience it as meaningful, and while you can rationalize your experience away, this comes at a very high cost. In short (say I), such experiences are striking (but by no means the only) counter-examples to Brassier's claim that "intelligibility has become detached from meaning" -- but they also force a revision of just what is meant by "intelligibility."

Amod Lele first pointed me to this thread of Kripal's research in a comment on this post, and I have since read a good deal of him and find a lot to applaud and some to critique; that will come in some later post, perhaps. Here I want only to underscore Kripal's claim that when weird things happen, they count and should not be brushed aside or left unacknowledged.
A paranormal event is one in which the world “out there” and the world “in here” manifest themselves as the same world. It is as if the mental and material dimensions of our experience have “split off” the same deeper Ground or One World. The physical world now begins to behave like a story or a series of signs. Hence the common descriptors of these experiences: “It was as if I were a character in a novel” or “It was as if I were caught in a movie.” These sensibilities, I suggest, are very accurate perceptions, because, of course, we all are caught in novels and movies, which we call culture and religion. A paranormal moment is one in which we realize that this is so. What we then do with this realization is up to us. (Kripal interview)
Elsewhere, remarking on this same self-description (‘it was as if I was a character in a novel’, or ‘it was as if I was inside a movie’) Kripal expands on how and why he thinks people are right to describe themselves thus:
I think they are. I think we are too, right now. We’re written, composed by our ancestors.
What shocked me was how many textual allusions people would naturally use to describe a paranormal event. They would talk about puns, jokes, allusions, readings or messages. It’s a textual process going on in the physical environment.

A paranormal event becomes an invitation to re-write the script. That could be on a personal level, or a cultural level for writers and film-makers. Take writers like Philip K. Dick or Whitley Strieber – these are people who create fantasy for a living. They know their spiritual experiences are fantastic. They know they’re being mediated by their imagination. They’re not taking them literally. And yet they would insist that something very real is coming through.
(interview)
This claim of Kripal's that in such moments, "The physical world now begins to behave like a story or a series of signs," is very far-reaching, and it directly conflicts with Brassier's contention above that "no narrative is unfolding in nature," though it does leave a bit vague the precise contours of the referent "nature" and what the character of this "unfolding" would be. It certainly does not demand that there be one "narrative." For myself, of course, I have already said more than once that if there is any sense to the notion of apprehending a "story" to the context-of-all-contexts, it will be a matter not of plot but of theme. This, I take it, is crucial to figuring out how best to get out of the stupid zero-sum game of the analytic/continental divide, which, as I mentioned before, Harman (following Brentano) sees as roughly mirroring the split between the sciences and the arts: slow, collaborative progress vs. the cult of the genius. If the danger is that the Analytic side, in its adulation of science, will simply capitulate to scientism, the answer certainly cannot be to merely laud the artistic model of the Continental side; for the same rot that has corroded science has (differently, but in the same process and perhaps to worse effect) deeply corrupted art. It would take us even further afield in the loop-the-loop paths these two posts are "following" to pursue the matter, but the same arguments about the pernicious effect of academia have been going on for even longer with regards to poetry and fiction and the visual arts, and perhaps music above all. Attackers of MFA programs complain, sometimes in so many words: you can teach craft, to some extent, but you can't teach vision, and sometimes the teaching ruins the vision that was there. Defenders shrug: American poetry is vibrant, diverse, and thriving, and most of the critics teach at MFA programs, so what gives? The debate in music gets less airplay but is just as serious and just as deep-seated. When, the accusation goes, was the last time a great work of music came out of the academy, anything moving enough, or even catchy enough, to compare with Cole Porter, Stephen Sondheim, Lennon and McCartney, Joni Mitchell, Prince, Radiohead? This sounds anti-elitist, because we think of pop music as being, well, populist in some sense; but it turns out to be just as much a function of the "cult of the genius" as any worship of Haydn or Mahler. It's not a myth we really want to say we believe in anymore, but we do not know what to replace it with.

What would a philosophy look like, though, that rejected the binary of "art" vs "science"? I am unsure, but Kripal provides us with another hint. At the very same time that the psychology and philosophy departments were differentiating themselves from each other, one towering figure was conducting intense research into just such strange phenomena as Kripal insists we ought to "put back on the table." That figure was William James, who straddles philosophy and psychology and is by any account one of the most important American thinkers in both; but whose tireless investigation into what we would call paranormal phenomena has been systematically marginalized by the unofficial archive-keepers, Kripal contends.
James was also a very active psychical researcher... I spent 20 years studying mysticism and never really thought about the paranormal or psychical phenomena.... We all read James, and we all talked about his pragmatism or his comparativism, but nobody ever talks about his psychical research. That really was a revelation to me, and I wanted to address the question of why we’ve taken that William James off the table.... Which William James are we talking about here? (about 5-6 min in.)
Kripal is overt about his resistance to scientism. For my part, while I do not consider scientism the same thing as nihilism (for one thing, it's often shallower), neither do I consider it the same thing as science. I would hope this need not be said, but as Coyne's misreading of Kripal makes clear, it does. I consider the scientific drive for truth, for as accurate as possible an account of the universe, to be a non-negotiable part of the philosophical spirit; the critique of superstition and of fear of thought must be cultivated intentionally (and it is not easy -- for thinking can indeed be frightening, and anyone who sneers at this hasn't done enough thinking yet to know.) I consider Scientism, on the other hand -- which I will gloss here as an inflation of methodological naturalism into an ontological presupposition, with attendant casualties elsewhere, including science, and in ethics above all -- to be one of the main alibis of nihilism today. Thus it is, not without some mild amusement, that I find myself lending tentative credence to the case that a really good ("true"?) ghost story could be an antidote to nihilism.

Edward Reed notes that
The projects on which James expended the most labor -- his stream of consciousness psychology, his studies of psychic powers, his analysis of religious experience and conversion -- have never been taken up seriously by those who claim to be his heirs in philosophy and psychology throughout the United States. (p 201)
This is somewhat overstated if one includes disciplines like religious studies, but as regards the second item on Reed's list, it surely stands. I don't have a well-thought-out explanation of this neglect, except to suggest that it stems from an obvious consensus-reality discomfort with such phenomena, "alleged" or not. Nor do I have an over-arching moral to draw. It is surely not the case that James' interest in psychic research by itself would provide some watertight response to nihilism. Nihilism is compatible (if it can be said to be "compatible" with anything) with any number of ostensibly paranormal hypotheses and phenomena, including robust ghost sightings, alien encounters, and the ESP that Jerry Coyne hand-waves away. It is important to note that Kripal does not limit his interest in the "paranormal" to these rather lurid examples. His main anecdotes -- and "anecdote" is not a dirty word to him -- are those very private and unverifiable events that steal upon us, or overwhelm us; unsought-for mystical encounters (whether thus labeled or not), like Philip K. Dick's 2-3-74 (which Kripal treats at length in chapter six of Mutants and Mystics), or Nietzsche's Sils Maria realization (which as far as I know he doesn't mention), or his own powerful kundaliniesque experience. What I want to insist upon is not the lab-ready status of such events but the importance of Kripal's claim that in these situations the world seems to be more like a story than like an equation. If, in our modern democratic hearts, despite wanting to lean towards Continental philosophy's concern with the good (and rightly-conducted) life, we don't want to unreservedly embrace the notion of "towering geniuses," the notion of the world itself as "art" seems -- while not a philosophical panacea -- clearly apposite.

"Never trust the artist," says D.H. Lawrence; "Trust the tale."

Thursday, March 10, 2016

Philosophy Departments, Nihilism, Psychic Research, and the Continental / Analytic Divide. And a few other things you didn't think were related. Part i


University philosophy has produced some crucial work, given livelihood to thinkers "important" and garden-variety, and fostered the awakening of generations of students. But we all know it's fucked up; that careerism and the interests of institutions are at best -- and the best is rare -- strange bedfellows with the love of wisdom. Of late this critique got a little bit of publicity in the New York Times' The Stone, with an article by Robert Frodeman and Adam Briggle, and then, recently, a rejoinder by Scott Soames. Frodeman and Briggle write:
Before its migration to the university, philosophy had never had a central home. Philosophers could be found anywhere — serving as diplomats, living off pensions, grinding lenses, as well as within a university. Afterward, if they were “serious” thinkers, the expectation was that philosophers would inhabit the research university.... Philosophers needed to embrace the structure of the modern research university, which consists of various specialties demarcated from one another. That was the only way to secure the survival of their newly demarcated, newly purified discipline. “Real” or “serious” philosophers had to be identified, trained and credentialed. Disciplinary philosophy became the reigning standard for what would count as proper philosophy. This was the act of purification that gave birth to the concept of philosophy most of us know today. As a result, and to a degree rarely acknowledged, the institutional imperative of the university has come to drive the theoretical agenda.
They see a deeper and more pernicious effect of all this:
The implicit democracy of the disciplines ushered in an age of “the moral equivalence of the scientist” to everyone else. The scientist’s privileged role was to provide the morally neutral knowledge needed to achieve our goals, whether good or evil. This put an end to any notion that there was something uplifting about knowledge. The purification made it no longer sensible to speak of nature, including human nature, in terms of purposes and functions. ...Once knowledge and goodness were divorced, scientists could be regarded as experts, but there are no morals or lessons to be drawn from their work. Science derives its authority from impersonal structures and methods, not the superior character of the scientist. ...
Their conclusion is dire:
Philosophy should never have been purified. Rather than being seen as a problem, “dirty hands” should have been understood as the native condition of philosophic thought — present everywhere, often interstitial, essentially interdisciplinary and transdisciplinary in nature. Philosophy is a mangle. The philosopher’s hands were never clean and were never meant to be.....Having become specialists, we have lost sight of the whole. The point of philosophy now is to be smart, not good. It has been the heart of our undoing.
Soames wants to contest all of this -- a far too dreary assessment. He sums up Frodeman and Briggle as having claimed:
that it was philosophy’s institutionalization in the university in the late 19th century that separated it from the study of humanity and nature, now the province of social and natural sciences. This institutionalization... led [philosophy] to betray its central aim of articulating the knowledge needed to live virtuous and rewarding lives.
Soames rejects both contentions.
I have a different view: Philosophy isn’t separated from the social, natural or mathematical sciences, nor is it neglecting the study of goodness, justice and virtue, which was never its central aim.
I wish Soames would address the first part of this argument -- that philosophy is "not separated" from the sciences -- to Stephen Hawking, Lawrence Krauss, or even Bill Nye, or any of the growing cadre of scientists who keep saying silly things like "philosophy is dead." As for the claim that philosophy was never "centrally" about goodness, justice and virtue -- this is to my mind a flabbergasting thing to say. Perhaps we have reached a differend here, but this assertion is a part -- a very large part -- of the problem.

The ever-engaging Brandon Watson at Siris rightly objects that
"Never" is an oddly strong word here -- the claim is certainly false (for instance) of very large portions of ancient philosophy. (Try to imagine a Plato who did not regard goodness, justice, and virtue as the central aim of philosophy. Or what in the world were Hellenistic philosophers mostly talking about if not primarily about "goodness, justice and virtue"?) But you don't have to go back so far. While one can argue about whether it's quite correct to call it "the" central aim, "the study of goodness, justice and virtue" was certainly far more central in the nineteenth century than you ever find it in the twentieth century.
I do not know exactly where Soames, Frodeman, and Briggle fall on the Analytic / Continental spectrum, but it isn't hard to discern the rough outlines of this split in their contentions, or in their curricula vitae. Soames has authored not one but two multi-volume histories of Analytic philosophy; every twentieth-century thinker he names to make his case is arguably from this "tradition," as he calls it, and the easy rapprochement with science which he commends is very much of a piece with Carnap and Quine, Sellars and Armstrong. There are any number of moral philosophers whom one could call Analytic, e.g. Anscombe, Rawls, Midgley (who incidentally is a good example of an anti-scientistic Analytic thinker); nonetheless, Frodeman and Briggle's concern with "how shall we live?" -- and they are both very engaged with this question on (or even under) the ground, so to speak -- is clearly flavored Continental (Frodeman studied under Stanley Rosen and Alphonso Lingis). All of which is meant to suggest that the A/C split may actually have something to do not just with reactions to the "departmentalization" of philosophy, but even perhaps with its origins. We'll come back to this.

Soames thinks he can argue cogently that Frodeman and Briggle have their genealogy wrong, and that "scientific progress" did not
rob philosophy of its former scientific subject matter, leaving it to concentrate on the broadly moral. In fact, philosophy thrives when enough is known to make progress conceivable, but it remains unachieved because of methodological confusion. Philosophy helps break the impasse by articulating new questions, posing possible solutions and forging new conceptual tools. Sometimes it does so when sciences are born, as with 17th-century physics and 19th-century biology. But it also does so as they mature. As science advances, there is more, not less, for it to do.
But the point is not whether philosophers have found material in the sciences, nor even whether they have contributed to scientific discourse (as has arguably been the case). The question is whether this is a good model for philosophy. Socrates clearly believed it was not, and it is worth quoting Plato at some length here:
When I was young, Cebes, I had an extraordinary passion for that branch of learning which is called natural science. I thought it would be marvelous to know the causes for which each thing comes and ceases and continues to be. And I was always unsettling myself with such questions as these: Do heat and cold, by a sort of fermentation, bring about the organization of animals, as some people say? Is it the blood, or air, or fire by which we think? Or is it none of these, and does the brain furnish the sensations of hearing and sight and smell, and do memory and opinion arise from these, and does knowledge come from memory and opinion in a state of rest? And again I tried to find out how these things perish, and I investigated the phenomena of heaven and earth until finally I made up my mind that I was by nature totally unfitted for this kind of investigation....Then I heard some one reading, as he said, from a book of Anaxagoras.... I rejoiced to think that I had found in Anaxagoras a teacher of the causes of existence such as I desired, and I imagined that he would tell me first whether the earth is flat or round; and whichever was true, he would proceed to explain the cause and the necessity of this being so, and then he would teach me the nature of the best and show that this was best; and if he said that the earth was in the centre, he would further explain that this position was the best, and I should be satisfied with the explanation given, and not want any other sort of cause. And I thought that I would then go on and ask him about the sun and moon and stars, and that he would explain to me their comparative swiftness, and their returnings and various states, active and passive, and how all of them were for the best.... How grievously was I disappointed! As I proceeded, I found my philosopher altogether forsaking mind or any other principle of order, but having recourse to air, and ether, and water, and other eccentricities. I might compare him to a person who began by maintaining generally that mind is the cause of the actions of Socrates, but who, when he endeavoured to explain the causes of my several actions in detail, went on to show that I sit here because my body is made up of bones and sinews; and the bones, as he would say, are hard and have joints which divide them, and the sinews are elastic, and they cover the bones, which have also a covering or environment of flesh and skin which contains them; and as the bones are lifted at their joints by the contraction or relaxation of the sinews, I am able to bend my limbs, and this is why I am sitting here in a curved posture—that is what he would say, and he would have a similar explanation of my talking to you, which he would attribute to sound, and air, and hearing, and he would assign ten thousand other causes of the same sort, forgetting to mention the true cause, which is, that the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence; for I am inclined to think that these sinews and bones of mine would have gone off long ago to Megara or Boeotia—by the dog they would, if they had been moved only by their own idea of what was best, and if I had not chosen the better and nobler part. (Phaedo 96a-99a)
Watson does not think much of Soames' case, and he musters some evidence in support of the claim that philosophy was aping the sciences, and psychology in particular:
The first clear, definite philosophy departments arose in response to the formation of psychology departments. The first clear, definite philosophy journals, associated with subject matter studied in departments devoted specifically to what was called philosophy, arose in the same way and for the same reason. It is not an accident that one of the first such philosophy journals, formed in 1876, is called Mind.
Watson aptly points out that a bit over a century ago, every academic degree was in philosophy if it was not in medicine, theology, or law. This is, by the way, what Kant's The Conflict of the Faculties is about, and of course, it is why to this day you can get an M.D. as "doctor of medicine" from the university's medical department or a Ph.D as "doctor of philosophy" from just about any other department you care to name -- though there remain equivalent degrees -- ofttimes honorary -- in Law and Divinity.

Why then, did "philosophy" become its own separate academic department? Watson cites the intellectual history traced in Edward Reed's From Soul to Mind, a work tracing the development of psychology. Watson first pointed me to this book in a discussion on philosophical diversity. Chapter 10 in particular lays out some of this history:
Both modern psychology and modern philosophy – as academic disciplines comprising professional scientists or scholars – began to emerge toward the end of the nineteenth century. Psychology in this sense preceded philosophy by about ten years, although it tended to be housed within philosophy departments. Obviously a great deal of jockeying for position, power, prestige, and influence took place. In the United States it was only in the 1890s that philosophers sought to organize specialized journals and started to think about founding a professional society (which did not begin functioning until 1901). In these activities they lagged at least a few years behind the psychologists, and many of the founding documents of ‘strictly philosophical’ institutions explicitly refer to the successes in psychology as one of the reasons for establishing such distinctively philosophical entities. Small wonder that the new professional philosophers latched onto the most provocative antipsychological methodologies available, phenomenology and logic, as defining the activity of members of their emerging discipline. (p 200)
Some may want to push back here by asking: Wait -- what about William James? Founder of "Pragmatism," the quintessentially American philosophy, and also author of Principles of Psychology? Even Reed confesses that James is an anomalous figure for his account. James never "took" to the new psychology. He also never approved of the way the "institutional imperative of the university ha[d] come to drive the theoretical agenda" of the humanities, even by his day. (See his "The Ph.D. Octopus," though it is occasioned by an objection to the English department in this case.)

We'll return to James -- in particular, his energetic and now completely neglected enthusiasm for psychic research -- in the next post; for now, there are a couple of things to note about the general picture Reed sketches. The first is historical. The advent of philosophy departments was part and parcel of the aforementioned infamous Continental / Analytic "split" which followed less than a generation later. A few years ago another student of both Lingis and Rosen, Graham Harman, saw the roots of this divide in Brentano's 1894 lecture on "The Four Phases of Philosophy," which you can read in translation in this book, along with some uneven commentary. Brentano writes:
The history of philosophy is a history of scientific efforts, and it is thus similar in some respects to the history of the other sciences. On the other hand, it is different from the latter and seems rather to be analogous to the history of the fine arts. Other sciences, as long as scientists pursue them, show a constant development which may sometimes be interrupted by periods of stagnation. Philosophy, however, like the history of the fine arts, has always had periods of ascending development and, on the other hand, periods of decadence.
This is of course not just a pair of analogies. The investigations of experimenters and theorists like Newton, Boscovich, Faraday, and Maxwell all fell under the rubric of "Natural Philosophy;" even as recently as 1949, Einstein (at least) could be described, in an echo of archaic usage, as "philosopher-scientist" and there was still a chance that this would be understood.

Apropos of Brentano's characterization, Harman remarks that "the entire analytic/continental rift is foreshadowed and explained in this passage." It plays out on the level of process, arguably, even more than it does on the level of doctrines promulgated. The usual argument is that analytic philosophy tries to apply the hard sciences' methods to philosophical problems, or that it tries to back up and do meta-science. But Harman thinks the problem is, rather, that Analytic philosophy tries to model itself upon what it sees as the history of science; while Continental philosophy is animated by a particular myth of the arts:
The difference between the two currents... isn’t so much one of content as of professional mission and self-understanding. Analytic philosophy is deeply committed to the idea that philosophy is a cumulative enterprise, and that the adding up of small discoveries will lead to a general professional advance.... By contrast, it seems pretty clear that continental philosophy follows the “fine arts” model of the history of philosophy…. The progress of philosophy is made not of cumulative argumentation but by the vision of towering geniuses.
Now, it is easy to see how this analysis resonates with Reed's case, and Watson's revisionary use of the case, that philosophy as an academic discipline was responding to the rise of the "new psychology." If Brentano's lecture points to an apparent tension in philosophy, the crisis pressed upon academic philosophy by departmentalization turned this tension into a schizoid structure. Not only would philosophers now decide "what would count as proper philosophy" by virtue of who was in or outside of institutional circles, they would wind up squabbling within the institution as well, until Analytic philosophers would deny that Continental philosophy was philosophy at all, and we got things like Quine signing a letter protesting Derrida's receiving an honorary degree, claiming that "In the eyes of philosophers, and certainly those working in leading departments of philosophy throughout the world, M. Derrida’s work does not meet accepted standards of clarity and rigour."

So much, then, for the historical question.

But a deeper and more unsettling question also arises. If we play this history forward, are we now living through the endgame of this untenable arrangement? That will be the starting point for Part ii.