Understanding Aristotelian-Thomism and Edward Feser

Whenever you’re trying to understand someone, you sometimes have to bridge several hidden gaps in prior knowledge before you can reach the level of knowledge necessary to understand what they’re plainly saying. This has been my experience in trying to understand Edward Feser.

I’m an extreme empiricist, a mechanist, a moderate physicalist, and a nominalist. Edward Feser is an extreme rationalist, a hylomorphic dualist, and a moderate realist regarding universals.

How do I even begin to understand what Feser has to say about Aristotelian-Thomism (A-T) when it presupposes all of Feser’s positions, which stand in near-direct opposition to my own? I obviously have to start at the beginning with rationalism vs. empiricism, but I don’t see any noteworthy arguments against empiricism (at least in the meaningful sense), and that being the case, shouldn’t I ignore everything else Feser has to say until that debate is settled? That is, if rationalism is wrong, then Feser’s other positions, which assume rationalism, are also likely wrong. At this point, trying to understand A-T is like assuming teleportation is possible so that I can reason about a person in Canada murdering a person in Japan.

So, maybe I should just sit back in my fortress of empiricism, waiting to see whether any sieges against it succeed while ignoring everything else. But I could be missing out on something quite important, like heaven and hell. Given the stakes, proactivity seems warranted, but then again this is just a form of intellectual blackmail.

Interestingly, I have reasoned my way out of many confident and strongly held positions before, but the process and arguments were far more apparent and acceptable. And this makes me skeptical of Edward Feser; I begin to psychologize him and try to uncover his journey to his currently held positions rather than attempting that journey myself. This also seems reasonable, given that I’m unable even to make it past the first pitfall in that journey. That might sound deferential, but it isn’t intended to be; it’s very possible that Feser has poorly reasoned foundational positions upon which his more complete and reasoned worldview rests.

That being said, I can see that A-T is logically valid (as opposed to sound). It makes sense as a model of reality, but then so do other very clearly ridiculous worldviews. What I’ve come to understand as important in a worldview is that it fundamentally squares with observed reality (oh fuck, that’s empiricism!). Logic, for example, squares with reality. It starts with observations as axioms, allowing you to emulate reality at the level of intellectual (or computational) abstraction. However, it’s clearly not perfect, as there are logical paradoxes which do not happen in reality (or so it would seem; paraconsistent logicians say otherwise). I suppose Feser might say that this is a necessary artifact of lossy compression in transferring intellectual universals to language (i.e., we intuit that paradoxes are impossible, but why isn’t that intuition apparent in logic itself?). But even so, logic clearly makes predictions that almost entirely accord with reality and our intuitions. How then does A-T do the same? I don’t think it supposes that it should, because, again, it presupposes rationalism, in which such accordance is argued to be unnecessary.

So, what the fuck should I do? I guess I should just keep hacking away toward an understanding of Feser’s position. Although, understanding a hugely complex and flawed position might be near impossible for me (assuming that it is flawed) — like trying to understand why the holocaust never happened.

Filed under Two Cents

Moral Intuitions

Humans have moral intuitions. We feel that actions range from very wrong to very good. These feelings are derived from evolution and culture. Different cultures instill different feelings in their members: Haitian Christians “righteously” murder homosexuals while Americans feel it morally necessary to give them the right to marry. Moral intuitions aren’t consistent across humanity.

Still, strip away the layers of cultural development and something is left: human universals — of which anthropologists have cataloged dozens. I suspect that for any secluded tribe in which there has been little cultural development, what remains are humanity’s moral universals. For instance, I doubt murdering at a whim is considered morally good anywhere. No doubt random killings happen, but are they considered a moral good? Actually, the more I think on it, the less sure I am. I can imagine cultures where random killings are considered morally neutral — even killings amongst their own. This might be the case in war-torn areas of Africa where child soldiers roam the countryside pillaging and murdering. At the least, if there are moral universals, I’d be surprised if they weren’t easily overridden by culture. Or perhaps moral universals are less obvious from a Western perspective. Perhaps something like acquiring status is a moral universal. Perhaps, universally, it’s morally good to execute higher-status actions. High status does seem to be universally regarded as a good. Even in communist and socialist circles there is status. Whatever the case, morality isn’t immutable, and culture is the dominating factor.

What does this say for normative ethics? Can there be a consistent and coherent way of morally evaluating an action? Is there a complete system of morality which our current intuitions approximate? I suspect not. I suspect moral intuitions are developed by a cultural system of heuristic building. “This heuristic works well enough for our society, and so our citizens should be expected to adopt it lest they lose status.” And so on, ad infinitum. What you’re left with is a patchwork of heuristics that help society flourish — this may be what evolution brought to the table. But, wait, can’t this process be optimized? Can’t it be turned into math?

The answer is “yes, but.” Once you take this process into the realm of math and make it optimal, it takes on another shape — a shape that humans didn’t evolve to intuit, like quantum-level effects. Maximizing average or total preferences, happiness, ideals, and so on becomes the goal, and this consequently leads us to strange conclusions. Conclusions we won’t accept. But, wait, average and total utility? Does that make any sense given what we know about morality? Morality is a system for guiding interactions among people. It’s not meant for individuals. What’s good is what’s good for society — not for all the individuals in that society. We’re doing the wrong math!
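As a toy sketch of why that math gets strange (my own illustration, not anything from the utilitarian literature or a specific author): the two obvious aggregation rules, total and average utility, can rank the very same pair of societies in opposite orders. The numbers below are made up for illustration.

```python
# Toy comparison of "total" vs "average" utility aggregation.
# Utilities are arbitrary made-up numbers; the point is only that the
# two rules disagree about which society is better.

def total_utility(population):
    """Sum of everyone's utility."""
    return sum(population)

def average_utility(population):
    """Mean utility per person."""
    return sum(population) / len(population)

# A small, very happy society vs. a huge, barely-happy one.
small_happy = [90] * 10    # 10 people, each at utility 90
huge_meh = [1] * 1000      # 1000 people, each at utility 1

# Total utility prefers the huge, barely-happy society (1000 > 900),
# while average utility prefers the small, happy one (90.0 > 1.0).
assert total_utility(huge_meh) > total_utility(small_happy)
assert average_utility(small_happy) > average_utility(huge_meh)
```

This is the kind of divergence that produces conclusions we won’t accept: maximizing the total pushes toward enormous populations of people whose lives are barely worth living, while maximizing the average can favor a tiny elite.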

Filed under Running Thoughts