MarkusQ 8 hours ago

Both this and the underlying system of fact checking are ignoring the elephant in the room: we have no direct access to the truth. Instead, all we can do is check for consistency. This can be either internal (if I say "two is even" and later "two is odd", I must have lied at least once) or external (e.g. look it up somewhere, or ask an expert).
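
To make the internal check concrete, here's a toy sketch (Python; the claim encoding and the hand-written negation table are invented for illustration):

    # Toy internal-consistency check: a claim is a proposition someone
    # asserted, plus the truth value they asserted for it.
    claims = [
        ("two is even", True),
        ("two is odd", True),
    ]

    # Hand-written negation table so propositions can be compared.
    negation = {"two is odd": "two is even"}

    def contradicts(a, b):
        """True if the two asserted claims cannot both hold."""
        prop_a, val_a = a
        prop_b, val_b = b
        if prop_a == prop_b:
            return val_a != val_b  # same proposition, opposite values
        if negation.get(prop_a) == prop_b or negation.get(prop_b) == prop_a:
            return val_a == val_b  # negated propositions, same value
        return False

    pairs = [(a, b) for i, a in enumerate(claims)
             for b in claims[i + 1:] if contradicts(a, b)]
    print(pairs)  # non-empty: at least one of the claims must be false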

The best external source is reality, if you can corner it with a well-designed experiment; this is, unfortunately, really, really hard.

Established theories are also good (but, as history has shown, can be wrong). The biggest problem with theory-based fact checking is that our best theories generally come in pairs that make conflicting claims or are otherwise inconsistent. Plus, the proper application of theories can often be a minefield of subtlety. So it comes down to a choice between "pick the theory that gives the answer you like" and "trust the experts" (i.e. argument from authority).

That leaves us with the most popular option: compare the claim against some consensus (and hope the consensus happens to be correct). This is generally easy, and it works great when there _is_ a consensus, which leads us to overestimate its reliability. And thus we waste years exploring amyloid beta plaques, looking for dark matter, teaching whole-word reading, and so on.

It would be great if we had an easy way to tell who's lying, but what we've actually got is a lot of ways to tell who we agree with and who we don't, and we don't always agree with each other on that.

  • h-bradio 8 hours ago

    OP here! Thanks for calling out this important point. As I fact-checked each claim, I was surprised at how many of the checks were "does the paper he's citing say what he says it does?" You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...

    • MarkusQ 6 hours ago

      Yeah. And that's really important: if someone makes a correct claim by accident, say by misreading a paper that incorrectly claims X as instead claiming not-X (which happens to be true), we shouldn't take it as evidence that they're trustworthy or honest, just lucky.

      But then you have cases where someone correctly cites a source that they know to be incorrect (or at least plausibly should know is incorrect). This is commonly done when flawed studies are funded specifically so they can be cited. That's arguably even more egregious lying, yet it would pass a consistency-based "fact check".

      Likewise, the factual claim ("eight out of ten doctors surveyed recommend smoking brand-x") can be true while the implication is false.

      In short, I'm not claiming such checks can't catch liars (they can), just that passing them doesn't mean the person was telling the truth, or that what they said or implied was correct.

poulpy123 8 hours ago

Thinking you can objectively quantify the degree to which a politician is lying is a mistake. Obvious, open, fact-checkable, and relevant lies are the minority.

  • h-bradio 8 hours ago

    OP here! Going into it, I definitely agreed and expected that easily fact-checkable claims would be the minority. But as I worked, I found that many of his claims were of the form "this paper says this", so checking them was as simple as asking "does the paper he's citing say what he says it does?" You can see them here: https://fact-check.brady.fyi/documents/3f744445-0703-4baf-89...

ygritte 11 hours ago

Donald Trump was actually not the top liar at the time of sampling, but only in 2nd place. Color me surprised.

  • prasadjoglekar 11 hours ago

    Well, "fact checkers" like Politifact are precisely what are considered biased themselves. Sampling from a biased dataset still shows the same bias.

    https://dukespace.lib.duke.edu/items/8f9a6f3b-efd7-46f3-b4be...

    You may be aligned with the alleged or real partisanship of PolitiFact, so to you there's no problem here. But team Harris and Buttigieg lost the election.

    Hence these consequences (from Wikipedia):

    In January 2025, Mark Zuckerberg announced an end to Meta's eight-year partnership with PolitiFact, claiming that "fact checkers have just been too politically biased."[62][63]

    • noelwelsh 10 hours ago

      This is a great example of the issue the blog post is addressing, namely:

      > The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

      The playbook is:

      1. Set an impossible standard (an undefined "unbiased" fact checker)

      2. When the impossible standard cannot be reached, throw your toys out of the pram

      Meanwhile, egregious levels of bullshit now go unchallenged.

      • brookst 9 hours ago

        Yeah, it's just the seatbelt fallacy: seatbelts are useless because people still die in car crashes.

        Somehow our whole society has fallen for the “unless you can point to a perfect saint who has never done any wrong, we might as well be led by active criminals” pitch. It’s so nihilistic.

    • ImPostingOnHN 9 hours ago

      Almost everything is "considered biased" by some people. In this case, Zuckerberg and the Bain employee who authored that report are indeed people -- 2 out of billions.

      Consider an alternative framing, "fact checkers like PolitiFact are precisely the ones considered UNbiased". It is at least as true (because at least 2 people consider it to be so).

      Given that alternative framing to yours: what, if anything, should we do about the situation?

      How do you think framing, rather than substance, affects that discussion?

    • littlestymaar 10 hours ago

      > In January 2025, Mark Zuckerberg announced an end to Meta's eight-year partnership with PolitiFact, claiming that "fact checkers have just been too politically biased."[62][63]

      No relationship with the fact that Trump became president again in January 2025, with Zuckerberg donating money to his inauguration, obviously.

  • alanbernstein 10 hours ago

    The "falsiness distribution" by itself is not capable of answering this kind of question. Imagine a politician who speaks just one statement, a "pants on fire" lie. They immediately reach the top liar spot.

    The distribution also leaves out the significance and the reach of the statements.

    Your statement is about as meaningful as the "fastest growing <whatever>" trick. E.g. growing from 0->1 user is infinite growth, so wins fastest growing immediately.
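
    One common fix is a minimum sample size before anyone is eligible for the ranking. A toy sketch (Python; all names, counts, and the threshold are invented):

        # Invented data: (politician, lies, total rated statements).
        records = [
            ("A", 1, 1),     # one statement, one lie: a "100% liar"
            ("B", 40, 100),  # 40% lie rate over a real sample
            ("C", 10, 80),
        ]

        def rank(recs, min_statements=0):
            # Keep only speakers with enough rated statements,
            # then sort by lie rate, worst first.
            eligible = [(name, lies / total)
                        for name, lies, total in recs
                        if total >= min_statements]
            return sorted(eligible, key=lambda r: r[1], reverse=True)

        print(rank(records))      # A "wins" on a single statement
        print(rank(records, 50))  # the threshold drops A entirely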

    • h-bradio 8 hours ago

      OP here -- thanks for your reply! You're exactly right! I included the NYT/PolitiFact graph at the top as an example of that problem. In the second half of the post, I propose what I think could work a little better (sampling comparable speeches and fact-checking the entire text).

    • superxpro12 9 hours ago

      If this were ESPN or similar, they would say "min 50 games" or something to filter out the outliers (heh).

cladopa 10 hours ago

[flagged]

  • andhuman 9 hours ago

    The graph is from 2007 onwards. So it doesn’t include when he was president.

  • aredox 10 hours ago

    You're confusing average with dispersion.

    That's why "data" is not "anecdotes".