This analysis explores the relationship between theoretical reasoning and empirical observation in the pursuit of knowledge.

While I appreciate various approaches to understanding the world, my perspective aligns more closely with empiricism than pure theoretical reasoning. I have colleagues who focus on systematic thought processes, which I respect as a valid intellectual pursuit. We each have our preferred methods of investigation and analysis.

However, it’s important to examine the limitations of purely theoretical approaches. Even the most rigorous logical framework can produce unreliable conclusions if built upon incomplete or inaccurate premises. The strength of an argument’s logical structure cannot compensate for foundational weaknesses in its empirical basis.

My preference for empiricism stems from skepticism toward arguments that rely primarily on intuition or conventional wisdom rather than verifiable data. Such approaches, despite their logical appeal, risk reinforcing existing assumptions rather than advancing our understanding.

This methodological divide raises interesting questions, particularly in contexts where theoretical and empirical approaches intersect. For instance, it’s worth examining why some communities that generally prioritize evidence-based thinking might still favor theoretical approaches in certain domains.


Let me stop here and make it clear that I’m not rejecting logic or critical thinking. There is a famous story about Einstein and the astronomer Sir Arthur Eddington. In 1919, Eddington led an expedition to the island of Príncipe off the coast of West Africa to observe a solar eclipse and test a key prediction of Einstein’s general relativity—that gravity would bend the path of starlight passing near the sun. According to one anecdote, a journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”

It seems like a rather foolhardy statement, defying the tenet of Traditional Rationality that experiment is sovereign above all. Einstein appears to have possessed an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer—as scientists must do. Who can know that a theory is correct in advance of experimental test?

Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion when they are wrong.

And Einstein may not have been quite as foolhardy as he sounded… Let’s dive into the problem from an information theory perspective.


Bits of Evidence and Hypothesis Testing

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts).
You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests.
If you apply a test with only a million-to-one chance of a false positive (roughly 20 bits), you’ll end up with a hundred candidates. Just finding the right answer within a large space of possibilities requires a large amount of evidence.
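These figures can be checked with a few lines of arithmetic. A minimal sketch, using the hypothesis count and test strength from the text (the variable names are mine):

```python
import math

# Pool of candidate hypotheses, as in the text.
num_hypotheses = 100_000_000

# Bits needed to single out one candidate: log2 of the number
# of alternatives, rounded up to whole bits.
bits_to_locate = math.ceil(math.log2(num_hypotheses))
print(bits_to_locate)  # 27

# A test with a million-to-one false-positive rate carries
# about log2(1,000,000) ~= 19.9 bits of discrimination.
test_bits = math.log2(1_000_000)
print(round(test_bits, 1))  # 19.9

# Applied to the whole pool, it still leaves ~100 survivors.
survivors = num_hypotheses / 1_000_000
print(survivors)  # 100.0
```

The point of the sketch: a 20-bit test cannot, even in principle, isolate one winner from a 27-bit space.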

Traditional Rationality vs. Bayesian Perspective

Traditional Rationality emphasizes justification:

  • “If you want to convince me of X, you’ve got to present me with Y amount of evidence.”
  • I often slip into phrasing such as: “To justify believing in this proposition at more than 99% probability requires 34 bits of evidence.”
    Or, “In order to assign more than 50% probability to your hypothesis, you need 27 bits of evidence.”
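The 34-bit figure above follows from straightforward odds arithmetic. A quick sketch, assuming the same pool of a hundred million hypotheses:

```python
import math

num_hypotheses = 100_000_000

# Prior odds for the one correct candidate: 1 : (N - 1).
prior_odds = 1 / (num_hypotheses - 1)

# To believe at more than 99% probability, posterior odds must exceed 99 : 1.
target_posterior_odds = 0.99 / 0.01

# Required likelihood ratio = posterior odds / prior odds;
# the evidence needed, in bits, is its log2.
required_ratio = target_posterior_odds / prior_odds
bits_needed = math.log2(required_ratio)
print(round(bits_needed, 1))  # 33.2 -> at least 34 whole bits
```

So the 34 bits decompose naturally: roughly 27 bits to locate the hypothesis at even odds, plus another 7 or so to push the posterior past 99%.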

This traditional phrasing implies that you start with a hunch or a private line of reasoning that leads you to a suggested hypothesis and then gather “evidence” to confirm it—to convince the scientific community or justify saying that you believe in your hunch.

But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate it in theory-space.
It’s not merely a question of justifying anything to anyone. If there are a hundred million alternatives, you need at least 27 bits of evidence just to focus your attention uniquely on the correct answer.

This holds true even if you call your guess a “hunch” or “intuition.” Hunches and intuitions are real processes in a real brain. If your brain doesn’t have at least 10 bits of genuinely entangled, valid Bayesian evidence to chew on, it cannot single out a correct 10-bit hypothesis for your attention—whether consciously or subconsciously. Subconscious processes can’t find one out of a million targets using only 19 bits of entanglement any more than conscious processes can.
Hunches can be mysterious to the huncher, but they can’t violate the laws of physics.

You see where this is going: At the time of first formulating the hypothesis—the very first time the equations popped into his head—Einstein must have had, already in his possession, sufficient observational evidence to single out the complex equations of General Relativity for his unique attention. Otherwise, he couldn’t have gotten them right.

Now, how likely is it that Einstein would have exactly enough observational evidence to raise General Relativity to the level of his attention, but only justify assigning it roughly 55% probability? Suppose General Relativity is a 29.3-bit hypothesis. How likely is it that Einstein would stumble across exactly 29.5 bits of evidence in the course of his physics reading?

Not likely! If Einstein had enough observational evidence to single out the correct equations of General Relativity in the first place, then he probably had enough evidence to be damn sure that General Relativity was true.
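To make the “exactly enough” scenario concrete, here is a sketch using the illustrative 29.3/29.5-bit figures from above (the bit counts are the text’s numbers, not measured quantities):

```python
# Complexity of the hypothesis and evidence accumulated, in bits
# (illustrative figures from the text).
hypothesis_bits = 29.3
evidence_bits = 29.5

# Prior odds against the hypothesis are 1 : 2^29.3, and each bit of
# evidence doubles the odds in its favor, so the posterior log-odds
# are simply the difference.
posterior_log_odds = evidence_bits - hypothesis_bits  # 0.2 bits in favor
posterior_odds = 2 ** posterior_log_odds
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # 0.535 -- barely past even odds

# A handful of extra bits swings the posterior to near-certainty:
for extra in (5, 10, 20):
    odds = 2 ** (posterior_log_odds + extra)
    print(extra, round(odds / (1 + odds), 6))
```

The knife-edge case (evidence landing within a fraction of a bit of the hypothesis’s complexity) is a tiny sliver of the possibilities; overshooting by even ten bits already yields better than 99.9% confidence.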

In fact, since the human brain is not a perfectly efficient processor of information, Einstein probably had overwhelmingly more evidence than would, in principle, be required for a perfect Bayesian to assign massive confidence to General Relativity.

“Then I would feel sorry for the good Lord; the theory is correct.”
Viewed from this perspective, the statement doesn’t sound nearly as appalling. And remember, General Relativity was correct, chosen from all that vast space of possibilities.


Methodological Considerations

The challenge lies not in the value of logical reasoning or critical thinking themselves, but in how we position these tools within our broader pursuit of knowledge. While these skills are essential components of intellectual inquiry, they should be viewed as part of a more comprehensive methodological framework.

Individual vs. Collaborative Knowledge Building

The development of knowledge benefits most from a collaborative approach that combines:

  • Rigorous theoretical frameworks
  • Empirical validation
  • Peer review and critique
  • Historical context and precedent
  • Diverse perspectives and methodologies

This collaborative model has proven more effective than purely individual approaches to knowledge building. Scientific progress, for instance, relies heavily on:

  • Building upon existing research
  • Methodological refinement through peer review
  • Cross-disciplinary insights
  • Empirical validation of theoretical work
  • Institutional knowledge sharing

Common Methodological Pitfalls

Several challenges can arise when theoretical approaches are not sufficiently integrated with empirical validation:

  1. Limited Source Material:
    • Relying too heavily on secondary sources
    • Not engaging with primary research
    • Insufficient consideration of historical context
  2. Methodological Isolation:
    • Working without peer review
    • Neglecting existing scholarship
    • Not testing assumptions empirically
  3. Cognitive Limitations:
    • Overconfidence in logical frameworks
    • Insufficient appreciation of uncertainty
    • Underestimating the complexity of systems

Moving Forward

A more balanced approach would:

  • Recognize the complementary nature of theoretical and empirical work
  • Emphasize the importance of collaborative knowledge building
  • Maintain epistemic humility while pursuing rigorous analysis
  • Value both individual insight and collective wisdom
  • Integrate multiple methodological approaches

This framework allows us to benefit from both logical rigor and empirical validation while avoiding the pitfalls of over-reliance on any single approach.