The Fail State of Algorithmic Distributed Truth Detection

Nathan Allen
7 min read · Feb 24, 2021


Distributed Truth Detection

Quora relies on users to ask and answer questions, and on users to vote on which answers are “best.” The quality of the system is a function of its scale. If ten people participate, the system will fail. But, the hypothesis goes, if 10 million people participate, the system will succeed. Good questions and accurate answers will surface. The truth will be known.

Similarly, ratings systems such as Amazon’s rely on the same assumptions and architecture. Five product reviewers acting in bad faith are irrelevant if 1,000 people review a product. The five bad-faith reviewers will be algorithmically marginalized.
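The marginalization-by-scale premise can be sketched in a few lines. This is a toy model (the star values and counts are illustrative, not Amazon’s actual algorithm): bad-faith one-star reviews are drowned out by honest reviewers, until the attack itself scales.

```python
import random
random.seed(0)

def rating(honest, bad_faith):
    """Mean star rating when every bad-faith reviewer votes 1 star
    and honest reviewers vote 4 or 5 stars."""
    reviews = [random.choice([4, 5]) for _ in range(honest)] + [1] * bad_faith
    return sum(reviews) / len(reviews)

# 5 bad-faith reviewers among 1,000 honest ones barely move the average.
print(round(rating(1000, 5), 2))
# The same attack, bought at scale, drags the average below 3 stars.
print(round(rating(1000, 1000), 2))
```

The point of the sketch is the asymmetry: the defense is purely statistical, so it holds only as long as the attacker stays small relative to the honest crowd.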

Facebook has an adjacent premise: bad faith actors (propagandists, pedophiles, etc.) are minimized (and often identified) by their anomalous behavior but, as distributed actors, are incapable of systemic corruption.

Bitcoin likewise functions on the same premise: widely distributed miners incentivized by the same objective (“money”) perpetuate the credibility of the system (chain/consensus).

The grand Web 2.0 hypothesis is that authority can be distributed (there’s another question as to whether distributed authority is authority at all) or algorithmized such that human authority, particularly in concentrated form, is unnecessary. This distributed authority (or non-authority) supplies truth/confidence and, much more importantly, decreases costs. Newspapers rely on concentrated human authorities (editors); Web 2.0 replaced them with system architectures that may have larger up-front costs but have lower long-term running costs and are infinitely scalable. Scaling is necessary for both systemic credibility and profit.

Of course, the grand Web 2.0 hypothesis has failed in one regard. Sure, if you scale something enough, you can make a trillion dollars. But truth is not surfaced; it’s sacrificed.

Distributed Authority Fallacy

All such distributed systems are corruptible.[1] They are distributed by design but not by necessity, which means it’s possible by design to corrupt them. Each relies on large scale, with logistical improbability, incentive, or both to catalyze the distributed scale required to negate aggregation efforts. However, all such systems can fail, and have (not necessarily permanently or systemically).

All such systems are prone to aggregated attempts to seize the truth mechanism. One can buy Amazon reviews at scale, the CCP targets Quora to corrupt the Q&A process, Facebook users have, on average, more than four accounts (Facebook doesn’t really know), and bitcoin’s key credibility mechanisms (consensus, double-spending avoidance, etc.) are corruptible.

In bitcoin, the perpetuation of the chain and its credibility are controlled by the miners, who are incentivized by new bitcoin. In short, miners are rewarded by the system for maintaining the system (a variation on crony capitalism, wherein corporations reward politicians for maintaining the system that rewards corporations).

Such a system can be internally robust as long as the full range of optimal outcomes exists within the system (for example, such a system cannot be expected to discover an external truth) and systemic defects are not within the system (external defects don’t necessarily destroy a system). For bitcoin, note the 2013 example when over 50% of bitcoin’s mining power was controlled by one group, which thereby controlled the authority mechanisms and could have corrupted the system. (A group with such control could deny others new bitcoins or payments, or approve double-spends; it’s mob rule without a Constitution.) Bitcoin (and Web 2.0) dodged a bullet because the majority group was incentivized by bitcoin’s internal mechanisms; they wanted bitcoin to succeed, so they didn’t corrupt it.
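The 50% threshold isn’t mystical; it falls out of a simple race. A toy simulation (assumed parameters, nothing like the real protocol’s difficulty adjustment or network propagation) shows that a miner with a bare majority of hashpower finds more blocks than everyone else combined, which is exactly the power to out-build the honest chain and rewrite it.

```python
import random
random.seed(1)

def block_race(attacker_share, rounds=1_000_000):
    """Toy race: each round, one block is found somewhere on the network;
    the attacker finds it with probability equal to its hashpower share."""
    attacker = sum(random.random() < attacker_share for _ in range(rounds))
    return attacker, rounds - attacker

a, h = block_race(0.51)
# A bare majority wins the longest-chain race in expectation,
# which is the power to orphan honest blocks and double-spend.
print(a, h)
```

The sketch also shows why 49% is qualitatively different from 51%: below the threshold the attacker’s private chain falls behind in expectation, so the same strategy fails.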

But consider that in 2019, 70% of bitcoin mining occurred in China. Technically, there’s no way of knowing whether that majority was controlled by a single person or entity (legally, it was: the beneficial owner of everything in China is the CCP). Any technical mechanisms embedded in the system to detect or deter aggregation can be obviated by obscuring ultimate control. How do we know that all the bitcoin miners — though operating through different IPs and ISPs and across thousands of computers — weren’t actually one person or entity leveraging the system’s lack of entity resolution? Systemic entity ignorance may be a feature — until it’s a bug.

All of these problems — Quora, Amazon, Bitcoin — suffer from an entity resolution problem and are vulnerable to entity obfuscation attacks. This is not a new problem. Banks and legal systems long ago recognized the issue (hence the concept of ‘beneficial ownership’) and mobile phone companies did after 9/11.
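The mechanics of entity resolution are not exotic; the hard part is doing it adversarially and at scale. A minimal union-find sketch (the account and attribute names are hypothetical) of the standard approach: link accounts that share a hard-to-fake attribute, such as a payment card or device, so that many “independent” accounts collapse to one beneficial entity.

```python
# Union-find over accounts and the attributes they share.
parent = {}

def find(x):
    """Return the representative of x's entity cluster."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing a and b."""
    parent[find(a)] = find(b)

# Hypothetical accounts that look independent but share attributes.
links = [("acct1", "card_A"), ("acct2", "card_A"),
         ("acct2", "dev_X"), ("acct3", "dev_X")]
for acct, attr in links:
    union(acct, attr)

# All three "independent" accounts resolve to one entity;
# an unlinked account does not.
print(find("acct1") == find("acct3"))
print(find("acct1") == find("acct9"))
```

The attacker’s counter, per the essay’s argument, is obvious from the sketch: never reuse an attribute, which is cheap for a state actor and expensive for everyone else.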

But entity resolution remains an unsolved problem, which means it’s a human, not an algorithmic, problem. Granted, Facebook is not incentivized to solve the problem given its business model, but it’s worth noting that Facebook hasn’t solved it. Further, those human failures — when the system fails at entity resolution — are then resolved by the legal system where and when necessary. The courts are full of cases attempting to resolve beneficial ownership and related entity problems (including cases specific to YouTube, Facebook, etc.). (The “independent” in “independent judiciary” means true redundancy, such that the initial solution — entity resolution, for example — may fail, but the system doesn’t fail.)

It is, in fact, easy to spread misinformation on Quora and Amazon, to terrorize people on Facebook and YouTube, to gain control of bitcoin’s basic mechanisms. By “easy” I mean: (1) it’s obvious how to do it, and (2) it’s cheap.

Sure, it’s not cheap for a guy in a basement in Kansas City operating on a desktop bought in 2012 on sale at Best Buy, but it’s cheap for a paranoid state actor who can and will throw $20 billion at a perceived threat. (Keep in mind that authoritarian regimes have no credibility other than their asserted competence at resolving threats by whatever means necessary.) It’s dirt cheap for the CCP to corrupt Quora or gain control of bitcoin if it wanted to (yes, the CCP has railed against the evils of bitcoin … and you believe the CCP?). The irony is that such systems are corruptible by the exact people they are often intended to circumvent or obviate.

Outside of a dynamic entity resolution solution and a supporting legal system (redundant concentrated authorities), systems are corruptible even at scale. It’s ironic that distributed systems (with distributed or effectively no human authority) are designed to withstand human corruptive efforts and yet are even more prone to them. It’s as if Web 2.0 ignored Genesis 3.0.

The Target Acquisition Problem

The distributed-truth hypothesis is premised on the purely Western notion that truth is an objective, a thing unto itself, and that the antidote for untruth is truth.

But if you wanted to destroy truth — if you wanted to corrupt Google’s data collection objectives — you wouldn’t counter with another argument or a counter-narrative of some sort. Instead, you’d flood the pipe with noise. The objective is not to seize the signal but to destroy the signal. And you destroy the signal by obfuscating it with noise. (1920s propaganda was counter-narrative, a “new” truth; 2020s propaganda is the destruction of truth as a concept.)
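The arithmetic of flooding is brutal. In this toy model (the sampling assumption is mine, not a claim about any platform’s feed), a reader samples a handful of messages from the pipe; the attacker’s goal is only to shrink the odds that any sampled message is the signal.

```python
def chance_of_seeing_truth(true_msgs, noise_msgs, sample_size=10):
    """Probability that a reader sampling `sample_size` messages
    (with replacement) sees at least one true message."""
    total = true_msgs + noise_msgs
    p_miss_one = 1 - true_msgs / total
    return 1 - p_miss_one ** sample_size

print(round(chance_of_seeing_truth(100, 100), 3))      # balanced pipe
print(round(chance_of_seeing_truth(100, 100_000), 3))  # flooded pipe
```

Note what the attacker never has to do: win an argument. Scaling noise a thousandfold drives the reader’s chance of encountering the signal toward zero, which is the target acquisition problem in miniature.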

The CCP is flooding the pipe with noise; it doesn’t intend for you to believe any of its claims … as long as you stop believing your own government. A critical mass of ulterior motive can subvert any distributed truth/consensus system … such as bitcoin or the media.

If truth is a target you wish to destroy, you create a target acquisition problem. (In the military, you can either obscure a target by hiding it or by flooding the field with targets; the CCP has specifically said that this is their solution to the U.S.’s missile defense system. They will not build a missile capable of successfully navigating the U.S. missile defense system; instead, they will launch thousands of missiles to overwhelm the system.)

Flooding the pipe with noise leaves people thinking their lives are built on quicksand, and they’ll cling to whatever seems stable, even an authoritarian regime. Surrounded by lies, by enemies, by noise, people will submit themselves to the seeming stability of the strong hand.

With bitcoin/crypto, noise isn’t bad data but rather bad intention — a critical mass of ulterior motive. And yet, would someone really invest so much into bitcoin only to destroy it? Would thousands of people write hundreds of thousands of Quora Q&As not to establish a counter-narrative but to destroy Quora as a source of truth? Of course they would; they already have.

The solution is as it was: (1) entity resolution (you know the user’s true identity) backed by a redundant resolution system (the law), combined with (2) gated participation (no, the wumao may not write Quora answers or leave YouTube comments).

If you’ve gotten this far, then you’ve reached the truly controversial argument. Truth is not a Western concept; it’s an American concept. When and where does truth bubble up to the surface of politics, law and media (anywhere on earth)? When and where does truth become an objective of media, a legal defense, a core ingredient to political dialogue? A: Seventeenth century New England.

I’d write a Quora answer to that question, but the CCP would flood the system with noise about the Chinese inventing truth 5,000 years ago in that ancient part of China called Boston.

This is the CCP’s official top propagandist spreading official CCP noise. He doesn’t expect you to take him seriously; he just wants you to think your own authorities are lying to you. (https://en.wikipedia.org/wiki/Zhao_Lijian)

[1] What about Wikipedia? Wikipedia, much like Amazon and Quora, gains credibility by not always being wrong. All have just enough veracity to convince you of their general credibility, so that you don’t notice that Wikipedia’s article on inoculation is factually wrong and comically biased, or that the article on Behe contains as much truth as late Nazi propaganda.
