Future, Present, & Past:

Speculative~~ Giving itself latitude and leisure to take any premise or inquiry to its furthest associative conclusion.
Critical~~ Ready to apply, to itself and its object, the canons of reason, evidence, style, and ethics, up to their limits.
Traditional~~ At home and at large in the ecosystem of practice and memory that radically nourishes the whole person.

Οὐδεὶς ἄμουσος εἰσίτω

Wednesday, November 14, 2012

On post-humanism

It is easy to conceive of an animal more intelligent than man, that is to say, capable of more ingenious inventions in order to achieve its final conditions, to act according to valences in its vital domain -- but remaining, nonetheless, in a vital domain without acceding to the world of values and meanings, i.e., to the human level.
--Raymond Ruyer, "The Vital Domain of Animals and the Religious World of Man."
And, one might add, it is sometimes easy to imagine the human species creating, or indeed even becoming, such an animal.

Ruyer is not much known in English, it seems; I've been able to find only a few essays. There's a case to be made that he was a strong influence on Deleuze. You can find some other pointers about him at Neofinalism and courtesy of the apparently indefatigable Taylor Atkins, here and here.


  1. A case study in why I loathe so much religious thinking. Ruyer's artificial boundary between "animal" and "man," and his equating of the "world of values and meanings" with the "human level," are as bogus as the "New Geocentrism" but unfortunately much more widely accepted.

    Humans *are* animals, and we inherited not just dark survival instincts but also our compassion from our evolutionary legacy. This guy needs to read Marc Bekoff and Frans de Waal.

    I'd like to read Skholiast's take on what (to me) appears to be a sorely outmoded Cartesianism.

  2. I will have served Ruyer ill if I leave anyone with the impression that his thought is just the bad old Cartesian "apartheid". He's actually attempting to lay out a philosophy of life that is neither vitalist nor reductionist. He calls his view of consciousness "the inverse of epiphenomenalism," a slogan I could gladly adopt. Look into chapter 16 of the book I link to above (the "case to be made") regarding Ruyer and Deleuze.

    Honesty compels me to add, however, that I do not share the allergy to hierarchy, or the default preference for rhizomes, that the anti-Cartesian stance seems to rest upon. That there are relevant senses in which human capacities are qualitatively different from, say, those of the sea anemone or the tree sloth (to say nothing of the sea or the tree) is not just a passé dogma whose adherents we are now in a position to pity or revile. (Tell me who you dismiss, and I will tell you who you are. OK, no, not really. But I can do a good caricature.) It is de rigueur today not to gloss that difference (between humans and, e.g., sloths) with the word "higher." But note that Ruyer does not do so either, in the bit I quote (well, he does use the word "level"), and he was writing in the 1950s, long before Bekoff or de Waal. In any case, I think the re-thinking of hierarchy and "flatness" (to use the current catchword) is a pretty urgent concern for ontology. It just won't do to collapse everything, or to put the human race at the tippy-top of the great chain of being... but then, no one ever did that, really.

    Having said this, I hasten to add that there is no question we can indirectly infer the existence of, say, empathy or other emotions in the subjective experience of animals, and of social "mores" too, if one wants to put it that way. You don't need to be an ethologist to recognize this -- just having a pet cat will do (nor will I concede that all such recognition is a sign of raging anthropomorphism).

    But I actually was not thinking about animal experience at all when I posted the above; I was thinking of what I consider to be overblown expectations about A.I. It's possible that a machine will one day persuade me that its feelings are hurt because I won't believe it has feelings -- wait, er, don't hold me to that -- but the point is that vast computational intelligence and prudential self-correction do not equate to caring. Moreover, if Steven Pinker is right (at least partly) to call fictions "empathy technology," meaning that certain modes of media can augment our natural capacities to relate to other beings (human and non-), then (I extrapolate) other modes can hamper, hobble, and conceivably eliminate such empathy, an admittedly dystopian eventuality that I was hinting at toward the end of the penultimate paragraph above.

  3. I am reminded of a quote from H.H. the Dalai Lama, "Humans have the potential to develop infinite Love and Wisdom," a reminder that it has always been easy for people to conceive of becoming considerably smarter even while remaining in the same human body.