There /is/ an `irrational' version of the reductionist analysis
implied by the above. Assume that an irrational mind can model the
probable results/future of a given action (and since NNets do
effectively the same thing on a smaller scale without reductionist
analysis, it seems clear it's /possible/). Then you can get the same
results, even for potentially fatal choices, via modelling and
discarding unfavourable paths. This doesn't require you to break
possibilities down /at all/, nor to build up rationales; it only
requires you to be sufficiently holistically complex to model
favourability, that is, to assign some kind of feasibility ranking by
consulting probable-outcome models.
Where the rational mind takes a situation, breaks it down into
discriminants and attempts to double-check the axiom/proof flow, the
irrational mind performs a `CRC,' if you will: it cranks the input
through a black box, looks at the output and decides whether the
outcome is favourable.
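As a minimal sketch of that black-box ranking: the `outcome_model'
function, the `choose' function, the threshold and the score table
below are all hypothetical illustrations, standing in for whatever
holistic model the irrational mind actually consults. Note that
nothing decomposes the situation into discriminants; candidate
actions are simply cranked through the model and the unfavourable
paths discarded.

```python
def outcome_model(action, situation):
    """Hypothetical black box: returns a favourability score in [0, 1].

    A stand-in for a holistic model (e.g. a trained net); here it is
    just a fixed scoring table, with unknown actions scored 0.
    """
    scores = {"flee": 0.9, "fight": 0.4, "freeze": 0.1}
    return scores.get(action, 0.0)

def choose(actions, situation, threshold=0.5):
    """Discard unfavourable paths, keep the best of what remains."""
    survivors = [a for a in actions
                 if outcome_model(a, situation) >= threshold]
    if not survivors:
        return None  # every modelled path looked unfavourable
    return max(survivors, key=lambda a: outcome_model(a, situation))

print(choose(["flee", "fight", "freeze"], situation="predator nearby"))
# -> flee
```

The point of the sketch is what is /absent/: no axioms, no proof
flow, no breakdown of the situation, only a ranking of modelled
outcomes.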
Logically, and rationally, if you accept the axiom `sometimes you're
wrong,' then you must recognize that sometimes your rational axioms
will be broken; there's no way around that situation save through
irrationality.