One can create a machine that requires inputs that don’t exist…
The performance bounds of any given algorithm can generally be determined by the upper and lower bounds of its inputs (data, bounded by quality and quantity), so the potential performance of the machine can be described as a range from the best possible inputs to the worst possible inputs, both tempered by qualitative observation of such data (in other words, not a theory of what’s “best” and what’s “worst” but an observational assessment of the range of possible inputs).
So one could build a theoretically high-performance machine that is practically low-performance because of the inputs. (A Ferrari is not a high-performance machine, measured by output, if the fuel isn’t available.)
It’s rather obvious that the performance of any machine is bounded by the range of its inputs.
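The observation can be sketched numerically. A toy illustration, with hypothetical input values, not a claim about any real machine:

```python
def machine_output(input_quality: float) -> float:
    """A toy 'machine': output scales with input quality (0.0 to 1.0)."""
    return 100.0 * input_quality

# The practically available inputs, observed rather than theorized.
observed_inputs = [0.2, 0.35, 0.5, 0.6]

# The machine's practical performance range is fixed by the input range,
# no matter how well the machine itself is engineered.
worst = machine_output(min(observed_inputs))  # about 20.0
best = machine_output(max(observed_inputs))   # about 60.0
```

However elegant `machine_output` is, the achievable range is set entirely by `observed_inputs`: an observational assessment, not a theory of best and worst.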
Truly complex systems sometimes conceal the inscrutable nature of their complexity. But let’s apply the same bounded-performance observation to a complex system: a school district.
Holding some variables constant (processes, students, etc.), the performance bounds of the district can be determined by the upper and lower bounds of the other inputs. So, the potential best and worst performance of a district can be determined by the potential best and worst teachers and administrators that the district could hire (again, realistically, not theoretically). Just as any machine’s performance is bounded by the possible range of practical inputs, any school district’s performance is bounded by the possible range of employees, which is a subset of the general population.
Given that we have the potential range for the other variables (processes, students, etc.) across the United States’ roughly 15,000 school districts, and that range is knowable over time, and given that these 15,000 districts likely employ every possible realistic combination of hirable teachers and administrators in the context of every possible realistic combination of the other variables, we can then determine the upper and lower bounds of total performance.
The observation to extract from this is that regardless of process or other input variables (i.e. students), school districts are performance-bounded by the population of hirable teachers and administrators. In other words, one cannot expect a district to perform better or worse than the possible range of teachers it can hire (in the context of the possible range of other variables).
So, the issue with input-bounded systems is that one can create a system that is input-bounded to poor performance. The upper bounds of all the other inputs may not adequately compensate for the upper bound of a single limiting input, regardless of the upper bounds of, for example, “the processes.” To observe input bounds (geography, poverty, etc.) is to observe functionally irrelevant data if the systemic performance bounds are informed by a functional constant. (Though obviously not a constant, population behaves as a constant because it is not affected by the system in question. For example, if one changes the third-grade math curriculum within a district, the practical population of hirable teachers is not affected. The performance bounds of that change in the third-grade math curriculum are informed by the practical population of hirable teachers, which therefore behaves as a constant on those curricular changes.)
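The functional-constant argument can be sketched as a toy model. The numbers and the min-of-inputs rule are hypothetical simplifications, chosen only to show the shape of the argument:

```python
def district_performance(curriculum_quality: float, teacher_quality: float) -> float:
    """Toy model: performance is capped by the weaker input (hypothetical 0-1 scale)."""
    return min(curriculum_quality, teacher_quality)

# The hirable-teacher pool behaves as a constant: changing the curriculum
# inside the district does not change it.
teacher_pool_quality = 0.6

before = district_performance(0.5, teacher_pool_quality)  # curriculum binds: 0.5
after = district_performance(0.9, teacher_pool_quality)   # teacher pool binds: 0.6

# Improving the curriculum past 0.6 cannot move performance past 0.6;
# the curricular change is bounded by the input it does not affect.
```

The curriculum is a variable the system can move; `teacher_pool_quality` sits outside the system, which is exactly what makes it behave as a constant on the change.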
The fundamental issue is that the expected output of such complex systems may be beyond the bounds of the inputs. The system itself (the “processes” or “curriculum” or whatever) may be adequately constructed to produce the upper bound of those expectations, but the inputs aren’t available in adequate quality and/or quantity.
The obvious subsequent postulate would seem rather explosive: no country produces a sufficient quantity of teachers to achieve the education performance that country desires. Performance has an upper bound defined in part by the nation’s population. The social objective of highly scaled (“universal”), quality education simply exceeds the upper bound of the inputs.
But really, this is a minor observation. The postulate gets more interesting with even more complex systems.
Is there any country that produces a sufficient quality and/or quantity of people to successfully operate a national government? Isn’t the argument against large-scale bureaucracies simply that no country (because governments more or less scale to population size) produces the necessary inputs to operate such a complex system? That the expectations of such a system will always exceed its upper bounds? That due to the population, the system’s performance will always be sub-optimal? (Always = 100% of the time.)
With democracy, we’ve developed a governance algorithm (some believe the perfect algorithm), but as that democracy is a very complex system (complexity often but not always correlating to population size), it requires inputs of data, people, funding, etc.
Is it not empirically true that most countries have governments that produce sub-optimal performance?
Is it not conceivable that most countries do not produce the necessary people to operate such a complex system?
Is it not true that democracies that don’t suffer system failure do so because of redundant systems (true redundancy and “checks and balances,” which operate as a checksum)? Such redundancy is often marked as “inefficiency” (e.g. when multiple federal offices essentially cover the same issues), but perhaps such redundancy is required. Sometimes these redundancies are empirically absurd; by the 1970s, the U.S. Federal government spent billions on anti-smoking efforts while simultaneously spending billions to support tobacco farming. In such a large, complex system, such absurdities are to be expected. In fact, they aren’t signs of inefficiency but rather are the hallmarks of the redundancy required to keep a sub-optimal system from failure.
Apart from any abstract argument about “democracy” or “rights,” it seems the primary argument in favor of dictatorships is that simple machines require significantly fewer (quality/quantity) inputs, and thus the upper performance bound is theoretically much higher.
And thus the pro-democracy argument cannot rely on superior outputs but rather superior process and, almost certainly, superior stability. Simple monolithic machines can scale more quickly and achieve simple but superior outputs more quickly, but such monolithic machines can also quickly fail catastrophically. (Communism is a simple monolithic machine. Software, today, is generally not constructed this way precisely because component failure often means catastrophic failure. Remember when software used to “crash”? That really doesn’t happen much with software anymore.)
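The stability point can be sketched with a toy probability model. The numbers are hypothetical and the independence assumption is a simplification; the sketch only shows why redundancy trades efficiency for survival:

```python
# Toy failure model: each component fails independently with probability p.
# A monolithic (series) system fails if ANY component fails; a redundant
# (parallel) system fails only if ALL copies fail.
p = 0.1  # per-component failure probability (hypothetical)
n = 5    # number of components / redundant copies

monolithic_failure = 1 - (1 - p) ** n  # about 0.41: failure is likely
redundant_failure = p ** n             # about 0.00001: failure needs all copies to fail
```

More components make the monolithic machine more fragile and the redundant one more robust, which is the sense in which the “inefficiency” of overlapping offices functions as insurance against catastrophic failure.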
And so democracy’s success is in its internalized revolution (elections) and its distributed (though inferior) power structure. In the end, the argument against Plato’s Republic is its tendency for catastrophic failure and not its potential for success.
Our modern teleological focus is almost entirely on building perfect machines. Politics, education, and most of our complex social systems are analyzed in the context of their mechanisms, incentives, processes — the algorithms — and not the universe of practical available inputs. And thus, we conceive and attempt to build perfect machines that consistently fail.
The delusion is completed because we’re trapped in this Victorian fantasy that we can educate (and ‘improve’) the population to ever-increasing (and systemically necessary) levels. The fact is that education data demonstrates that we haven’t meaningfully ‘improved’ the U.S. K12 population since we started collecting detailed general-population education data (the 1960s). The delusion is further compounded by the reality that we’ve stripped from general education the one component that has traditionally been viewed as the most meaningful input for democracies: virtue. Those who dev’d the algorithm we call democracy were not so concerned with anything referenced in the Common Core. Our Founding Fathers, for example, were somewhat obsessed with virtue such that it, and not the ability to do long division, was considered a critical input.
Complex systems can be input-bounded to poor performance. The upper bounds of all the other inputs may not adequately compensate for the upper bound of a single limiting input. The fundamental issue with building the more perfect union may be the limitation of the inputs. Ironically, it’s the simple-machine Marxists and concomitant Fascists who tend to view population not as a functional constant but as a variable (Stalin’s “In the future, there will be fewer but better Russians”), while democracies tend to build for optimal theoretical results while ignoring the limitations of inputs. The failure of the former is evidenced in the corpse-filled fields of Stalin, Hitler, and Mao. The failure of the latter is increasingly evidenced in pervasive apathy, a resignation to reform-theater that perpetuates the Victorian fantasy of progress, tomorrow.
A quick visual lesson on input limitations.
That building — yes, that’s one building — is the Scottish Parliament building. It opened in 2004. Had Satan ascended to terra firma intent on defiling God’s most cherished creations — truth and beauty — he could have made no better start than to construct the Scottish Parliament building. Had the Scots expertly executed a plan to secure international derision and ridicule, they could have done no better than to spend £414 million for something that looks like the unsellable bits from an architectural junkyard.
I think we can all agree that the Scots are smart people — perhaps above average for all humans — and yet the Scottish Parliament building does nothing to dispel the notion that the Scots are also inveterate drunkards. How else save for excessive alcohol consumption does one explain it?
Fortunately, the Scots actually conducted an investigation. Not only was the building criminally hideous — apparently the grotesque looks far worse when the tiny scale model building is human-sized — but the Scots believed the building would cost less than £50 million.
A summary of the investigator’s report is below, but the general explanation for this habitable junkyard is that the Scottish government bureaucracy is filled with incompetent paper-pushers who never questioned even the most transparently wrong information. According to the report, gross incompetence was everywhere. The Scottish Parliament is a monument to input limitations: even Scotland does not have enough competent people to operate a democracy, so what hope do you have? The Scottish government demonstrated in concrete and steel that if all you desire is a generally inoffensive, functional building that doesn’t drain your budget, then democratic government is unlikely to deliver for no other reason than such government requires too many competent people. Now, imagine how well they manage more complex projects — social programs, education, energy policy, etc. The results are surely worse yet not as transparent.
Presenting his report in September 2004, Lord Fraser told how he was “astonished” that year after year the ministers who were in charge were kept so much in the dark over the increases in cost estimates. He also stated that a Parliament building of sufficient scale could never have been built for less than £50m, and was “amazed” that the belief that it could be was perpetuated for so long. He believed that from at least April 2000, when MSPs commissioned the Spencely Report to decide whether the building should continue, it should have been realised that the building was bound to cost in excess of £200m. Furthermore, £150m of the final cost was wasted as a result of design delays, over-optimistic programming and uncertain authority.
Despite having only an outline design, the designers RMJM/EMBT (Scotland) Ltd stated without foundation that the building could be completed within a £50m budget. Nevertheless, these estimates were believed by officials.
The [government] was obsessed with early completion and failed to understand the impact on cost and the completion date if high-quality work and a complex building were required. In attempting to achieve early completion, the management contractor produced optimistic programmes, to which the architects were unwise to commit.
It was nice of Lord Fraser to, at least in part, quantify the cost of government input limitations (incompetence) — £150m for the one building. Now, imagine how much those same people can waste on even more complex projects (which is pretty much everything else a government does).
There’s so much more: http://www.holyroodinquiry.org/FINAL_report/conclusions.pdf