Re: virus: Rationality in the Cave

KMO (kmo@c-realm.com)
Sun, 14 Mar 1999 18:56:52 -0800

Eric Boyd wrote:

> From: David McFadzean <morpheus@lucifer.com>
> <<
> True enough, I likely wouldn't believe your story. Occam's razor isn't
> perfect but I'm willing to be wrong some fraction of a percent of the
> time. The only alternative is to be wrong more often than that.

Why do you say that? You assert that the less Occam-friendly explanation will be the more serviceable option only "some fraction of a percent of the time" (which I take you to mean "almost never," even though 7/8ths of 99% is a fraction of a percent). What makes you think this is the case?

It seems to me that for any accurate explanatory account of an event there are going to be a number of alternative explanations that take fewer variables, factors, and distal causes into account. The more simplistic explanations, postulating fewer theoretical entities and fewer relationships between those entities, will always fare better against Occam's razor than the story I've specified as being an "accurate explanatory account." It seems an open question how often such simplistic accounts are pitted against more accurate but also more complex rivals.

> Is
> that somehow better? I'm assuming that being right or wrong has some
> real consequences in these situations. If not, it is better to suspend
> judgement.
> >>

If you claim to be able to employ a seemingly useful strategy that is based on a set of axioms on which you have consciously suspended judgement, somebody might accuse you of claiming to be on level 3.

> What you're looking at here is the classic choice between type one and
> type two errors (in statistical hypothesis testing). Allow me to
> explain:
>
> Type one errors (alpha) result when you reject a hypothesis even
> though it is true (as above). A type two error (beta) occurs when you
> do not reject a hypothesis even though it is false.
>
> The problem is that these two errors are related -- if you decrease the
> probability of an alpha error, the probability of a beta error goes
> up, and vice-versa.
>

Cool. I can imagine this thread winding its way into a territory in which the alpha mistake/beta mistake distinction is a handy navigational device. Thanks for droppin' some science on us, Eric.
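
For anyone following along who wants to see the tradeoff rather than take it on faith, here's a rough sketch in Python. This is my own illustration, not Eric's; the two bell curves and the thresholds are invented. Slide the decision threshold one way and alpha shrinks while beta grows, and vice-versa:

    # A made-up threshold test between two hypotheses, just to show the
    # alpha/beta tradeoff Eric describes.  Illustrative numbers only.
    from statistics import NormalDist

    h0 = NormalDist(mu=0, sigma=1)   # null hypothesis: no effect
    h1 = NormalDist(mu=2, sigma=1)   # alternative hypothesis: real effect

    for threshold in (0.5, 1.0, 1.5, 2.0):
        alpha = 1 - h0.cdf(threshold)   # type one: reject H0 though it's true
        beta = h1.cdf(threshold)        # type two: accept H0 though it's false
        print(f"threshold={threshold:.1f}  alpha={alpha:.3f}  beta={beta:.3f}")
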

> Now, depending on your outlook, you can decide which of the two errors
> would be worse, and tip the scales accordingly.

Wow! You can do that? Right on. Do you have some manageable algorithm for calculating the expected utility of an alpha mistake and a beta mistake to determine when and which way one should tip the scales? Maggs may recall from logic class some methods for calculating expected utility, but they aren't particularly manageable, especially if you don't have a pen and paper and a few minutes to perform the operations.

Perhaps I'm assuming some unnecessary limitations on our cognitive abilities. I can imagine someone developing a training regime that could reliably teach people of "average intelligence" to perform these formal operations in their heads quickly enough to use on the fly and in real time.
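
On paper (or in silicon) the operation isn't hard; it's doing it in your head at the checkout counter that's the trick. Here's the kind of back-of-the-envelope calculation I have in mind, again with invented numbers. The prior and the two costs are the parts you'd have to supply from your own outlook:

    # Hypothetical "scale tipping": choose the threshold that minimizes
    # expected cost, given a guessed prior and made-up costs for each error.
    from statistics import NormalDist

    h0, h1 = NormalDist(0, 1), NormalDist(2, 1)
    p_h0 = 0.7           # prior probability the hypothesis (H0) is true -- a guess
    cost_alpha = 1.0     # cost of rejecting the hypothesis when it's true
    cost_beta = 5.0      # cost of accepting it when it's false (worse, in this story)

    def expected_cost(threshold):
        alpha = 1 - h0.cdf(threshold)     # chance of a type one error
        beta = h1.cdf(threshold)          # chance of a type two error
        return p_h0 * alpha * cost_alpha + (1 - p_h0) * beta * cost_beta

    best = min((t / 100 for t in range(-100, 300)), key=expected_cost)
    print(f"tip the scales to a threshold of about {best:.2f}")
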

> A balance is of
> course best, but I can see situations in which tipping them slightly
> one way or the other could be beneficial.

Do you make the decision to tip the scales intuitively? Do you perceive a comfortable semantic distance between "having confidence in the reliability of your intuitive judgements," "trusting your intuition," and "having faith that you will intuitively know when to tip the scales and which way to tip them?"

> Deep problems occur if the scale tips too far in one direction.
> Avoiding alpha errors at all costs makes one gullible, while (horror
> of horrors!) avoiding beta errors at all costs makes one dogmatic.

And gullible or dogmatic, you're easily manipulated.

-KMO