Complex Systems Won’t Survive the Competence Crisis - Boldly Going where South Africa has gone before

Complex Systems Won’t Survive the Competence Crisis
[Photo: Ilya Mirnyy, firefighter in field, 2020]
At a casual glance, the recent cascades of American disasters might seem unrelated. In a span of fewer than six months in 2017, three U.S. Navy warships experienced three separate collisions resulting in 17 deaths. A year later, powerlines owned by PG&E started a wildfire that killed 85 people. The pipeline carrying almost half of the East Coast’s gasoline shut down due to a ransomware attack. Almost half a million intermodal containers sat on cargo ships unable to dock at Los Angeles ports. A train carrying thousands of tons of hazardous and flammable chemicals derailed near East Palestine, Ohio. Air Traffic Control cleared a FedEx plane to land on a runway occupied by a Southwest plane preparing to take off. Eye drops contaminated with antibiotic-resistant bacteria killed four and blinded fourteen.

While disasters like these are often front-page news, the broader connection between the disasters barely elicits any mention. America must be understood as a system of interwoven systems; the healthcare system sends a bill to a patient using the postal system, and that patient uses the mobile phone system to pay the bill with a credit card issued by the banking system. All these systems must be assumed to work for anyone to make even simple decisions. But the failure of one system has cascading consequences for all of the adjacent systems. As a consequence of escalating rates of failure, America’s complex systems are slowly collapsing.

The core issue is that changing political mores have established the systematic promotion of the unqualified and sidelining of the competent. This has continually weakened our society’s ability to manage modern systems. At its inception, it represented a break from the trend of the 1920s to the 1960s, when the direct meritocratic evaluation of competence became the norm across vast swaths of American society.

In the first decades of the twentieth century, the idea that individuals should be systematically evaluated and selected based on their ability rather than wealth, class, or political connections, led to significant changes in selection techniques at all levels of American society. The Scholastic Aptitude Test (SAT) revolutionized college admissions by allowing elite universities to find and recruit talented students from beyond the boarding schools of New England. Following the adoption of the SAT, aptitude tests such as Wonderlic (1936), Graduate Record Examination (1936), Army General Classification Test (1941), and Law School Admission Test (1948) swept the United States. Spurred on by the demands of two world wars, this system of institutional management electrified the Tennessee Valley, created the first atom bomb, invented the transistor, and put a man on the moon.

By the 1960s, the systematic selection for competence came into direct conflict with the political imperatives of the civil rights movement. During the period from 1961 to 1972, a series of Supreme Court rulings, executive orders, and laws—most critically, the Civil Rights Act of 1964—put meritocracy and the new political imperative of protected-group diversity on a collision course. Administrative law judges have accepted statistically observable disparities in outcomes between groups as prima facie evidence of illegal discrimination. The result has been clear: any time meritocracy and diversity come into direct conflict, diversity must take priority.

The resulting norms have steadily eroded institutional competency, causing America’s complex systems to fail with increasing regularity. In the language of a systems theorist, by decreasing the competency of the actors within the system, formerly stable systems have begun to experience normal accidents at a rate that is faster than the system can adapt. The prognosis is harsh but clear: either selection for competence will return or America will experience devolution to more primitive forms of civilization and loss of geopolitical power.

From Meritocracy to Diversity

The first domino to fall as Civil Rights-era policies took effect was the quantitative evaluation of competency by employers using straightforward cognitive batteries. While some tests are still legally used in hiring today, several high-profile enforcement actions against employers caused a wholesale change in the tools customarily usable by employers to screen for ability.

After the early 1970s, employers responded by shifting from directly testing for ability to using the next best thing: a degree from a highly-selective university. By pushing the selection challenge to the college admissions offices, selective employers did two things: they reduced their risk of lawsuits and they turned the U.S. college application process into a high-stakes war of all against all. Admission to Harvard would be a golden ticket to join the professional managerial class, while mere admission to a state school could mean a struggle to remain in the middle class.

This outsourcing did not stave off the ideological change for long. Within the system of political imperatives now dominant in all major U.S. organizations, diversity must be prioritized even if there is a price in competency. The definition of diversity varies by industry and geography. In elite universities, diversity means black, indigenous, or Hispanic. In California, Indian women are diverse but Indian men are not. When selecting corporate board members, diversity means “anyone who is not a straight white man.” The legally protected and politically enforced nature of this imperative renders an open dialogue nearly impossible.

However diversity itself is defined, most policy on the matter is based on a simple premise: since all groups are identical in talent, any unbiased process must produce the same group proportions as the general population, and therefore, processes that produce disproportionate outcomes must be biased. Prestigious journals like Harvard Business Review are the first to summarize and parrot these views, which then flow down to reporting by mass media organizations like Bloomberg Businessweek. Soon, it joins McKinsey’s “best practices” list and becomes instantiated in corporate policies.

Unlike accounting policies, which emanate from the Financial Accounting Standards Board and are then implemented by Chief Financial Officers, the diversity push emanates inside of organizations from multiple power centers, each of which joins in for independent reasons. CEOs push diversity policies primarily to please board members and increase their status. Human Resources (HR) professionals push diversity policies primarily to avoid anti-discrimination lawsuits. Business development teams push diversity to win additional business from diversity-sensitive clients (e.g. government agencies). Employee Resource Groups (ERGs), such as the Black Googler Network, push diversity to help their in-group in hiring and promotion decisions.

Diversity in Theory and Practice

In police academies around the country, new recruits are taught to apply an escalation of force algorithm with non-compliant subjects: “Ask, Tell, Make.” The idea behind “Ask, Tell, Make” is to apply the least amount of force necessary to achieve the desired level of compliance. This is the means by which police power, which is ultimately backed by significant coercive force, can maintain an appearance of voluntary compliance and soft-handedness. Similarly, the power centers inside U.S. institutions apply a variant of “Ask, Tell, Make” to achieve diversity in their respective organizations.

The first tactics for implementing diversity imperatives are the “Ask” tactics. These simply ask all the members of the organization to end bias. At this stage, the policies seem so reasonable and fair that there will rarely be much pushback. Best practices such as slating guidelines are a common tool at this stage. Slating guidelines require that every hiring process must include a certain number and type of diverse candidates for every job opening. Structured interviews are another best practice that requires interviewers to stick with a script to minimize the chance of uncovering commonalities between the interviewer and interviewee that might introduce bias. Often HR will become involved in the hiring process, specifically asking the hiring manager to defend their choice not to hire a diverse candidate. Because the wrong answer could result in shaming, loss of advancement opportunities, or even termination, the hiring manager can often be persuaded to prioritize diversity over competence.

Within specialized professional services companies, senior-level recruiting will occasionally result in a resume collection where not a single diverse candidate meets the minimum specifications of the job. This is a terrible outcome for the hiring manager as it attracts negative attention from HR. At this point, firms will often retain an executive search agency that focuses on exclusively diverse candidates. When that does not result in sufficient diversity, roles will often have their requirements diluted to increase the pool of diverse candidates.

For example, within hedge funds, the ideal entry-level candidate might be an experienced former investment banker who went to a top MBA program. This preferred pedigree sets a minimum bar for both competence and work ethic. This first-pass filter enormously winnows the field of underrepresented candidates. To relax requirements for diversity’s sake, this will be diluted in various ways. First, the work experience might be stripped. Next, the role gets offered to MBA interns. Finally, fresh undergraduates are hired into the analyst role. Dilution works not just because of the larger field of candidates it allows for but also because the Harvard Admission Office of 2019 is even more focused on certain kinds of diversity than the Harvard Admission Office of 2011 was.

This dilution is not costless; fewer data points result in a wider range of outcomes and increase the risk of a bad hire. All bad hires are costly but bad hires that are diverse are even worse. The risk of a wrongful termination lawsuit either draws out the termination process for diverse hires or results in the firm adjusting by giving them harmless busy work until they leave of their own volition—either way, a terrible outcome for the organizations which hired them.

If these “Ask” tactics do not achieve enough diversity, the next step in the escalation is to attach carrots and sticks to directly tell decision-makers to increase the diversity of the organization. This is the point at which the goals of diversity and competence truly begin displaying significant tension with each other. The first step is the implementation of Key Performance Indicators (KPIs) linked to diversity for all managers. Diversity KPIs are a tool to embarrass leaders and teams that are not meeting their diversity targets. Given that most organizations are hierarchical and pyramidal, combined with the fact that America was much whiter 50 years ago than it is today, it is unsurprising that senior leadership teams are less diverse than America as a whole—and, more pertinently, than their own junior teams.

The combination of a pyramid-shaped org chart and a senior leadership team where white men often make up 80 percent or more of the team means that the imposition of an aggressive KPI sends a message to the layer below them: no white man in middle management will likely ever see a promotion as long as they remain in the organization. This is never expressed verbally. Rather, those overlooked figure it out as they are passed over continually for less competent but more diverse colleagues. The result is demoralization, disengagement, and over time, departure.
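The arithmetic behind that unspoken message is easy to sketch. The following is a minimal back-of-the-envelope calculation in Python; every number in it is hypothetical, chosen only to illustrate how a hard diversity target interacts with a pyramid-shaped org chart and slow senior turnover, not drawn from any real organization.

senior_slots = 100            # size of the senior leadership team (hypothetical)
white_male_now = 80           # current white-male incumbents, i.e. 80 percent
target_share = 0.50           # KPI: white-male share of leadership in five years
turnover_rate = 0.10          # annual senior turnover
years = 5

openings = int(senior_slots * turnover_rate * years)            # 50 openings over the period
wm_departures = int(openings * white_male_now / senior_slots)   # ~40 of the departures are white men, assuming proportional attrition
wm_target = int(senior_slots * target_share)                    # 50 white-male incumbents allowed at the end
wm_promotions = wm_target - (white_male_now - wm_departures)    # promotions that can go to white men

print(f"{wm_promotions} of {openings} promotions can go to white men ({wm_promotions / openings:.0%})")

Under these assumptions, only about a fifth of promotions can go to white men even if they make up well over half of the middle-management feeder pool—which is precisely the message described above.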

While all the aforementioned techniques fall into the broad category of affirmative action, they primarily result in slightly tilting the scale toward diverse candidates. The next step is simply holding different groups to different standards. Within academia, the recent Students for Fair Admissions v. President and Fellows of Harvard College lawsuit leveraged data to show the extent to which Harvard penalizes Asian and white applicants to help black and Hispanic applicants. The UC System, despite formally being forbidden from practicing affirmative action by Proposition 209, uses a tool called “comprehensive admission” to accomplish the same goal.

The latest technique, which was recently brought to light, shows UC admissions offices using the applicants’ high schools as a proxy for race to achieve their desired goal. Heavily Asian high schools such as Arcadia—which is 68 percent Asian—saw their UC-San Diego acceptance rate cut from 37 percent to 13 percent while the 99-percent-Hispanic Garfield High School saw its UC-San Diego acceptance rate rise from 29 percent to 65 percent.

The preference for diversity at the college faculty level is similarly strong. Jessica Nordell’s The End of Bias: A Beginning heralded MIT’s efforts to increase the gender diversity of its engineering department: “When applications came in, the Dean of Engineering personally reviewed every one from a woman. If departments turned down a good candidate, they had to explain why.”

When this was not enough, MIT increased its gender diversity by simply offering jobs to previously rejected female candidates. While no university will admit to letting standards slip for the sake of diversity, no one has offered a serious argument why the new processes produce higher or even equivalent quality faculty as opposed to simply more diverse faculty. The extreme preference for diversity in academia today explains much of the phenomenon of professors identifying with a minor fraction of their ancestry or even making it up entirely.

During COVID-19, the difficulty of in-person testing and online proctoring created a new mechanism to push diversity at the expense of competency: the gradual but systematic elimination of standardized tests as a barrier to admission to universities and graduate schools. Today, the majority of U.S. colleges have either stopped requiring SAT/ACT scores, no longer require them for students in the top 10 percent of their class, or will no longer consider them. Several elite law schools, including Harvard Law School, no longer require the LSAT as of 2023. With thousands of unqualified law students headed to a bar exam that they are unlikely to pass, the National Conference of Bar Examiners is already planning to dilute the bar exam under the “NextGen” plan. Specifically, “eliminat[ing] any aspects of our exams that could contribute to performance disparities” will almost definitionally reduce the degree to which the exam tests for competency.

Similarly, standards used to select doctors have also been weakened to promote diversity. Programs such as the City College of New York’s BS/MD program have eliminated the MCAT requirement. With the SAT now optional, new candidates can go straight from high school to the United States Medical Licensing Examination Step 1 exam in medical school without having gone through any rigorous standardized test whose score can be compared across schools. Step 1 scores were historically the most significant factor in the National Residency Matching Program, which pairs soon-to-be doctors with their future residency training programs. Because Step 1 scores serve as a barrier to increasing diversity, they have been made pass/fail. A handful of doctors are speaking out about the dangers of picking doctors based on factors other than competency but most either explicitly prefer diversity or else stay silent, concerned about the career-ending repercussions of pointing out the obvious.

When even carrot and stick incentives and the removal of standards do not achieve enough diversity, the end game is to simply make decision-makers comply. “Make” has two preferred implementations: one is widely discussed and the other is, for obvious reasons, never disclosed publicly. The first method of implementation is the application of quotas. Quotas or set-asides require the reservation of admissions slots, jobs, contracts, board seats, or other scarce goods for women and members of favored minority groups. Government contracts and supplier agreements are explicitly awarded to firms holding designations such as SB, WBE, MBE, DBE, SDB, VOSB, SDVOSB, WOSB, HUB, and 8(a).

Within large employers and government contractors, quotas are used for both hiring and promotions, requiring specific percentages of hiring or promotions to be reserved for favored groups. During the summer of 2020, the CEO of Wells Fargo was publicly shamed after his memo blaming the underrepresentation of black senior leaders on a “very limited pool” of black talent was leaked to Reuters. Less than a month later, the bank publicly pledged to reserve 12 percent of leadership positions for black candidates and began tying executive compensation to reaching diversity goals. In 2022, Goldman Sachs extended quotas to the capital markets by adopting a policy to avoid underwriting IPOs of firms without at least two board members who are not straight white men.

When diversity still refuses to rise to acceptable levels, the remaining solution is the direct exclusion of non-diverse candidates. While public support for anti-discrimination laws and equal opportunity laws is high, public support for affirmative action and quotas is decidedly mixed. Hardline views such as those expressed in author Ijeoma Oluo’s Mediocre: The Dangerous Legacy of White Male America—namely, that “any white man in a position of power perpetuates a system of white male domination”—are still considered extreme, even within U.S. progressive circles.

As such, when explicit exclusion is used to eliminate groups like white men from selection processes, it is done subtly. Managers are told to sequester all the resumes from “non-diverse” candidates—that is, white males. These resumes are discarded and the candidates are sent emails politely telling them that “other candidates were a better fit.” While some so-called “reverse discrimination” lawsuits have been filed, most of these policies go unreported. The reasons are straightforward; even in 2023, screening out all white men is not de jure legal. Moreover, any member of the professional managerial class who witnesses and reports discrimination against white men will never work in their field again.

Even anonymous whistleblowing is likely to be rare. To see why, suppose incontrovertible evidence was produced that one’s employer was explicitly excluding white male candidates, and a lawsuit was filed. The employer’s reputation and the reputations of all its employees, including the white men still working there, would be tarnished. That said, we can expect to see more lawsuits from men who feel they have little to lose.

This “Ask, Tell, Make” framework, under various descriptions, is the method by which individuals with a vested interest in more diversity push their organizations toward their preferred outcome. The pressure begins with requests for modest changes to recruiting to make it “more fair” and ends with the heavy-handed application of quotas and even exclusion. The American system is not a monolith, however, which means that the strength of the push and its effects on competency are not distributed evenly.

Competency Is Declining From the Core Outwards

Think of the American system as a series of concentric rings with the government at the center. Directly surrounding that are the organizations that receive government funds, then the nonprofits that influence and are subject to policy, and finally business at the periphery. Since the era of the Manhattan Project and the Space Race, the state capacity of the federal government has been declining almost monotonically.

While this has occurred for a multitude of reasons, the steel girders supporting the competency of the federal government were the first to be exposed to the saltwater of the Civil Rights Act and related executive orders. Government agencies, which are in charge of overseeing all the other systems, have seen the quality of their human capital decline tremendously since the 1960s. While the damage to an agency like the Department of Agriculture may have long-term deadly consequences, the most immediate danger is at safety-critical agencies like the Federal Aviation Administration (FAA).

The Air Traffic Control (ATC) system used in the U.S. relies on an intricate dance of visual or radar observation, transponders, and radio communication, all with the incredible challenge of keeping thousands of simultaneously moving planes from ever crashing into each other. Since air traffic control is one of the few jobs that pay more than $100,000 per year without requiring a college diploma, it has been a popular career choice for individuals without a degree who nonetheless have an exceptionally good memory, attention span, visuospatial awareness, and logical skills. The Air Traffic Selection and Training (AT-SAT) Exam, a standardized test of those critical skills, was historically the primary barrier to entry for controllers. As a consequence of the AT-SAT, as well as a preference for veterans with prior controller experience, 83 percent of air traffic controllers in the U.S. were white men as of 2014.

That year, the FAA added a Biographical Questionnaire (BQ) to the screening process to tilt the applicant pool toward diverse candidates. Facing pushback in the courts from well-qualified candidates who were screened out, the FAA quietly backed away from the BQ and adopted a new exam, the Air Traffic Skills Assessment (ATSA). While the ATSA includes some questions similar to those of the BQ, it restored the test’s focus on core air traffic skills. The importance of highly-skilled air controllers was made clear in the most deadly air disaster in history, the 1977 Tenerife incident. Two planes, one taking off and one taxiing, collided on the runway due to confusion between the captain of KLM 4805 and the Tenerife ATC. The crash, which killed 583 people, resulted in sweeping changes in aviation safety culture.

Recently, the tremendous U.S. record for air safety established since the 1970s has been fraying at the edges. The first three months of 2023 saw nine near-miss incidents at U.S. airports, one with two planes coming within 100 feet of colliding. This terrifying uptick from years prior resulted in the FAA and NTSB convening safety summits in March and May, respectively. It seems unlikely that they dared to discuss root causes.

Given the sheer size of the U.S. military in both manpower and budget dollars, it should not come as a surprise that the diversity push has also affected the readiness of this institution. Following three completely avoidable collisions of U.S. Navy warships in 2017 and a fire in 2020 that resulted in the scuttling of USS Bonhomme Richard, a $750 million amphibious assault craft, two retired marines conducted off-the-record interviews with 77 current and retired Navy officers. One recurring theme was the prioritization of diversity training over ship handling and warfighting preparedness. Many of them openly admit that, given current issues, the U.S. would likely lose an open naval engagement with China. Instead of taking the criticism to heart, the Navy commissioned “Task Force One Navy,” which recommended deemphasizing or eliminating meritocratic tests like the Officer Aptitude Rating to boost diversity. Absent an existential challenge, U.S. military preparedness is likely to continue to degrade.

The decline in the capacity of government contractors is likewise obvious, with the largest contractors being the most directly impacted. The five largest contractors—Lockheed Martin, Boeing, General Dynamics, Raytheon Company, and Northrop Grumman—will all struggle to maintain competency in the coming years.

Boeing, one of only two firms globally capable of mass-producing large airliners, has a particularly striking crisis unfolding in its institutional culture. Shortly after the 737 MAX was released, 346 people died in two nearly identical crashes in Indonesia and Ethiopia. The cause of the crashes was a complex interaction between design choices, cost-cutting led by MBAs, FAA issues, the MCAS flight-control system, a faulty sensor, and pilot training. Meanwhile, on the defense side of the business, Boeing’s new fuel tanker, the KC-46A Pegasus, is years behind on deliveries due to serious technical flaws with the fueling system along with multiple cases of Foreign Object Debris left inside the plane during construction: tools, a red plastic cap, and in one case, even trash. Between the issues at ATC and Boeing, damage to the U.S.’s phenomenal aviation safety record seems almost inevitable.

After government contractors, the next-most-affected class of institutions is nonprofit organizations. They are entrapped by the government, whose policies they are both subject to and trying to influence; by the opinions of their donor base; and by the lack of any profit motive. The lifeblood of nonprofits is access to capital, either directly in the form of government grants or through donations that are deemed tax-deductible. Accessing federal monies means being subject to the full weight of U.S. diversity rules and regulations. Nonprofits are generally governed by boards whose members tend to overlap with the list of major donors. Because advocacy for diversity and board membership are both high-status, board members unsurprisingly tend to voice favorable opinions of diversity, and those opinions flow downstream to the organizations they oversee.

Nonprofits—including universities, charities, and foundations—exist in an overlapping ecosystem with journalism, with individuals tending to freely circulate between the four. The activities of nonprofits are bound up in the same discourses shaped by current news and academic research, with all four reflecting the same general ideological consensus. Finally, lacking the profit motive, the decision-making processes of nonprofits are influenced by what will affect the status of the individuals within those organizations rather than what will affect profits. Within nonprofits, the cost of incompetent staffers is borne by “stakeholders,” rather than any one individual.

While all businesses subject to federal law must prioritize diversity over competency at some level, the problem is worse at publicly-traded corporations for reasons both obvious and subtle. The obvious reason is that larger companies present larger targets for EEOC actions and discrimination lawsuits with hundreds of millions of dollars at stake. Corporations have logically responded by hiring large teams of HR professionals to preempt such lawsuits. Over the past several decades, HR has evolved from simply overseeing onboarding to involvement in every aspect of hiring, promotions, and firings, seeing them all through a political and regulatory lens.

The more subtle reason for pressure within publicly-traded companies is that they require ongoing relationships with a spiderweb of banks, credit ratings agencies, proxy advisory services, and most importantly, investors. Given that the loss of access to capital is an immediate death sentence for most businesses, the CEOs of publicly-traded companies tend to push diversity over competency even when the decline in firm performance is clear. CEOs would likely rather accept a small drag on profit margins than risk a potentially career-ending scandal from pushing back.

Whereas publicly-traded corporations nearly uniformly push diversity, privately-held businesses vary tremendously based on the views of their owners. Partnerships such as the Big Four accounting firms and top-tier management consultancies are high-status. High-status firms must regularly proclaim extensive support for diversity. While the firms tend to be highly selective, partnerships whose leadership is overwhelmingly white and male have generally capitulated to the zeitgeist and are cutting standards to hit targets. Firms often manage around this by hiring for diversity and then putting diversity hires into roles where they are the least likely to damage the firm or the brand. Somewhat counterintuitively, firms with diverse founders are often highly meritocratic, as the structure harnesses the founder’s desire to make money and shields them from criticism on diversity issues.

The most notable example of a diverse meritocracy is Vista Equity Partners, the large private equity firm founded by Robert F. Smith, America’s wealthiest black man. Robert F. Smith is one of the most vocal advocates for, and donors to, historically black U.S. colleges and universities. It would be reasonable to expect Vista to prioritize diversity over competency in its portfolio companies. However, Vista has instead been profiled for giving all portfolio company management teams the Criteria Cognitive Aptitude Test and ruthlessly culling low-performers. Given the amount of value to be created by promoting the best people into leadership roles of their portfolio companies, one might imagine this to be low-hanging fruit for the rest of private equity, yet Vista is an outlier. Why Vista can apply the CCAT without a public outcry is obvious.

The other firms that tend to still focus on competency are those that are small and private. Such firms have two key advantages: they fall below the fifteen-employee threshold for the most onerous EEOC rules and the owner can usually directly observe the performance of everyone inside the organization. Within small firms, underperformance is usually obvious. Tech startups, being both small and private, would seem to have the right structure to prioritize competency.

The American System Is Cracking

Promoting diversity over competency does not simply affect new hires and promotion decisions. It also affects the people already working inside of America’s systems. Morale and competency inside U.S. organizations are declining. Those who understand that the new system makes it hard or impossible for them to advance are demoralized, affecting their performance. Even individuals poised to benefit from diversity preferences notice that better people are being passed over and the average quality of their team is declining. High performers want to be on a high-performing team. When the priorities of their organizations shift away from performance, high performers respond negatively.

This effect was likely seen in a recent paper by McDonald, Keeves, and Westphal. The paper points out that white male senior leaders reduce their engagement following the appointment of a minority CEO. While it is possible that author Ijeoma Oluo is correct, and that white men have so much unconscious bias raging inside of them that the appointment of a diverse CEO sends them into a tailspin of resentment, there is another more plausible explanation. When boards choose diverse CEOs to make a political statement, high performers who see an organization shifting away from valuing honest performance respond by disengaging.

Some demoralized employees—like James Damore in his now-famous essay, “Google’s Ideological Echo Chamber”—will directly push back against pro-diversity arguments. Like Damore, they will be fired. Older, demoralized workers, especially those who are mere years from retirement, are unlikely to risk their jobs by pointing out the decline in competency. Those who have a large enough nest egg may simply retire to avoid the indignity of attending another Inclusive Leadership seminar.

As older men with tacit knowledge either retire or are pushed out, the burden of maintaining America’s complex systems will fall on the young. Lower-performing young men angry at the toxic mix of affirmative action (hurting their chances of admission to a “good school”) and credentialism (limiting the “good jobs” to graduates of “good schools”) are turning their backs on college and white-collar work altogether.

This is the continuation of a trend that began over a decade ago. High-performing young men will either collaborate, coast, or downshift by leaving high-status employment altogether. Collaborators will embrace “allyship” to attempt to bolster their chances of getting promoted. Coasters realize that they need to work just slightly harder than the worst individual on their team. Their shirking is likely to go unnoticed and they are unlikely to feel enough emotional connection to the organization to raise alarm when critical mistakes are being made. The combination of new employees hired for diversity, not competence, and the declining engagement of the highly competent sets the stage for failures of increasing frequency and magnitude.

The modern U.S. is a system of systems interacting together in intricate ways. All these complex systems are simply assumed to work. In February of 2021, cold weather in Texas caused shutdowns at unwinterized natural gas power plants. The failure rippled through the systems with interlocking dependencies. As a result, 246 people died. In straightforward work, declining competency means that things happen more slowly, and products are lower quality or more expensive. In complex systems, declining competency results in catastrophic failures.

To understand why, one must understand the concept of a “normal accident.” In 1984, Charles Perrow, a Yale sociologist, published the book, Normal Accidents: Living With High-Risk Technologies. In this book, Perrow lays out the theory of normal accidents: when you have systems that are both complex and tightly coupled, catastrophic failures are unavoidable and cannot simply be designed around. In this context, a complex system is one that has many components that all need to interact in a specified way to produce the desired outcome. Complex systems often have relationships that are nonlinear and contain feedback loops. Tightly-coupled systems are those whose components need to move together precisely or in a precise sequence.
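A toy model—not Perrow’s, and with purely hypothetical numbers—suggests why such systems are so sensitive to the competence of the people running them. Suppose a fault becomes a catastrophic accident only if it slips past every one of several independent safeguards (a design margin, an operator, an inspector), and each safeguard catches it with probability equal to the competence of the people behind it:

def accident_probability(competence: float, safeguards: int = 3) -> float:
    # Toy model: a fault becomes an accident only if it defeats every safeguard.
    return (1.0 - competence) ** safeguards

for competence in (0.99, 0.95, 0.90, 0.80):
    p = accident_probability(competence)
    print(f"per-safeguard competence {competence:.0%} -> fault-to-accident probability {p:.1e}")

In this sketch, a decline in per-safeguard competence from 99 percent to 90 percent multiplies the fault-to-accident rate a thousandfold, which is why a modest, distributed erosion of skill can surface as a sudden-looking uptick in rare catastrophic failures.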

The 1979 Three Mile Island Accident was used as a case study: a relatively minor blockage of a water filter led to a cascading series of malfunctions that culminated in a partial meltdown. In A Demon of Our Own Design, author Richard Bookstaber added two key contributions to Perrow’s theory: first, that it applies to financial markets, and second, that regulation intended to fix the problem may make it worse.

The biggest shortcoming of the theory is that it takes competency as a given. The idea that competent organizations can devolve to a level where the risk of normal accidents becomes unacceptably high is barely addressed. In other words, rather than being taken as absolutes, complexity and tightness should be understood to be relative to the functionality of the people and systems that are managing them. The U.S. now faces a novel question: what happens when the men who built the complex systems our society relies on cease contributing and are replaced by people who were chosen for reasons other than competency?

The answer is clear: catastrophic normal accidents will happen with increasing regularity. While each failure is officially seen as a separate issue to be fixed with small patches, the reality is that the whole system is seeing failures at an accelerating rate, which will lead in turn to the failure of other systems. In the case of the Camp Fire that killed 85 people, PG&E fired its CEO, filed Chapter 11, and restructured. The system’s response has been to turn off the electricity and raise wildfire insurance premiums. There has been very little deeper reflection. The more recent coronavirus pandemic was another teachable moment. What started just three years ago with a novel respiratory virus has caused a financial crisis, a bubble, soaring inflation, and now a banking crisis in rapid succession.

Patching the specific failure mode is simultaneously too slow and induces unexpected consequences. Cascading failures overwhelm the capabilities of the system to react. Twenty years ago, a software bug caused a poorly-managed local outage that led to a blackout that knocked out power to 55 million people and caused 100 deaths. Utilities were able to restore power to all 55 million people in only four days. It is unclear if they could do the same today. U.S. cities would look very different if they remained without power for even two weeks, especially if other disruptions unfolded at the same time. What if emergency supplies sat on trains immobilized by fuel shortages due to the aforementioned pipeline shutdown? The preference for diversity over competency has made our system of systems dangerously fragile.

Americans living today are the inheritors of systems that created the highest standard of living in human history. Rather than protecting the competency that made those systems possible, the modern preference for diversity has attenuated meritocratic evaluation at all levels of American society. Given the damage already done to competence and morale combined with the natural exodus of baby boomers with decades worth of tacit knowledge, the biggest challenge of the coming decades might simply be maintaining the systems we have today.

The path of least resistance will be the devolution of complex systems and the reduction in the quality of life that entails. For the typical resident in a second-tier city in Mexico, Brazil, or South Africa, power outages are not uncommon, tap water is probably not safe to drink, and hospital-associated infections are common and often fatal. Absent a step change in the quality of American governance and a renewed culture of excellence, those conditions prefigure America’s future.
 
Assuming I'm the CEO of a Fortune 500 company, what am I to do? How do I hire based on competency without being sued for discrimination?
One thing you could do is stop sponsoring H-1B visas. That would instantly boost the average competency of your IT department.

Another thing is to only hire people who already live near your offices. It's not illegal to not seek out people from out-of-state and pay them to move to you.
 
There's a lot of science fiction like that. I should one day try to do a dive through the older stuff, as it has predictions we could've never seen coming. The Veldt was required reading and I consider its message important, with so many people letting the internet raise their children and the children going insane.
I wish to request a copy of the Betonhaus homeschool reading list.
 
I wish to request a copy of the Betonhaus homeschool reading list.
Fuck if I remember most of them. I vaguely recall us reading these kid detective books where the boy hears about or witnesses a crime and then solves it, and we're supposed to figure out from the story how he solved it before turning the page to the explanation. Like in one, his dad is a cop trying to find people who robbed a bank, and they come across a hitchhiker who's been in the heat all day and offers to give them directions; the boy realizes the hitchhiker is one of them because the hitchhiker had chocolate and broke off a piece to share.
 
Double your pleasure (horror?), folks.
Yeah, about that whole "trust the science" thing...
Not gonna bother with a new thread, it's the same site.
Ash Milton, December 19, 2022

The Institutions of Science With Lord Martin Rees

This interview appears in print in PALLADIUM 08: Scientific Authority.
Surrounded by the English wilderness in far-west Shropshire, a young Martin Rees discovered the cosmos. Born in 1942, he began his education in a school run by his parents in a repurposed Victorian mansion. Today, he recalls this home sparking childhood questions—like why high tides vary on the coast—that would only be answered when he began his scientific studies. After a few years at a boarding school away from home, he entered Cambridge University, where he discovered astrophysics and has since spent much of his career. From 2004 until 2012, he was Master of Trinity College.
Rees was fortunate to enter his field at a time when it was particularly creative and generative. Radio astronomy from Cambridge and elsewhere was generating evidence for the Big Bang theory of the universe’s origins. Relativistic astrophysics was delivering exciting new models of black holes and quasars. Rees’s work has helped to disprove the Steady State theory, update models of how galaxies form, and improve our understanding of the earliest conditions of the universe. Throughout his decades-long career, he has interacted and collaborated with luminaries like Joseph Rotblat, Freeman Dyson, and Sir Roger Penrose.
In addition to his research, Rees has dedicated much of his career to bringing science to the general public. He has written numerous books, lectured around the world, and advised governments and other institutions. He was appointed Astronomer Royal in 1995 and became a Life Peer in the House of Lords in 2005. From 2005 to 2010, he served as President of the Royal Society, first established in 1660 and today the oldest scientific academy in continuous existence.
Rees’s latest book is If Science Is to Save Us. Looking to predecessors like the Great Exhibition and the Pugwash Conferences, he invokes an optimistic vision of science as a force that can improve society. But with increasing global turmoil, and with institutional obstacles holding back rising generations of scientists, realizing that vision is by no means guaranteed.

The Early Years of Science

In your new book, you take us through some of the early history of science. How would you characterize this earlier period when science was getting established as a discipline?
Lord Martin Rees:
Well, back at that time, science wasn’t a profession. The Royal Society, until the mid-nineteenth century, was a place where most of the scientists were amateurs, though some of the leading ones were real polymaths. What they had in common was that they were wealthy enough to be independent, and they were lucky enough to have an education.
And so it was only after the mid-nineteenth century that science became professionalized as an academic subject. If I look at Cambridge University, the idea of the undergraduate major in science as a subject to study—in American jargon—didn’t take root until the middle of the nineteenth century.
These days, everyone thinks in terms of scientific ideas being the impetus for new technology. But although that may be to some extent true now—there’s a symbiosis between them, anyway—in the earlier days there was very little connection. With the invention of the steam engine, and of all the technology involved in its construction, they wouldn’t have thought they were using science at all.
So technology, even sophisticated technology like shipbuilding, was in no sense based on science in the way that we would say that modern technology is based on science.
You mention Charles Babbage in your book, who in the nineteenth century attacked the Royal Society. It sounds like he thought math and physics had become stagnant. What was his critique? And as a former president of the Royal Society, do you think there was anything in it?
He was thinking about the UK. And it was certainly true in the UK that, throughout the eighteenth century, even the leading universities were in a pretty low state—no intellectual standards and so on. So I think the universities did, as he implies, sink into a torpor in the eighteenth century and in the early nineteenth century.
And indeed, he was quite right about the Royal Society. And what happened in the nineteenth century in England was that an interest in science and technology was revived in a big way. One thinks of the Great Exhibition of 1851, which was an amazing show of all these technological achievements. And the first half of the nineteenth century saw the foundation of specialized societies: the Linnean Society, the Geological Society, the Royal Astronomical Society, and many others. And also the British Association for the Advancement of Science, which had a sort of outreach program.
And I think those were founded mainly because there was growing public interest in science, but also because the Royal Society itself was not being at all effective.
What were the communities behind these generative, productive periods of science like?
It was all fairly amateurish. In Germany, the foundation of research universities is attributed to Alexander von Humboldt in about 1820. Thereafter, the Germans had the idea of a university where you would teach students, but also have people doing research. And of course, as I say in the book, that’s the model which we now have in the UK and in the US. And now, China has taken it up too.
But in mainland Europe—indeed, even in Germany—they don’t quite have that model now. Most of the best research in Germany is done at Max Planck Institutes, which are severed from universities. And in France, it’s done by people supported by the CNRS [the French National Centre for Scientific Research] and civil servants. So Germany developed an institutional structure for science that was independent of academies in the 1820s. And other countries, including the UK, took longer to develop the sort of scientific components based in universities.
Incidentally, I would recommend, if you don’t know it, The Age of Wonder by Richard Holmes, which is a fascinating book about science and culture in the late eighteenth and early nineteenth centuries. But the point here is that there was a sort of culture of science linked to the culture of the humanities. I mean, Shelley and Wordsworth were interested in science.
The Great Exhibition is part of this interesting turn that happened in the middle of the nineteenth century, the idea that science needed to have more public engagement. Until that point, as you’ve said, these were very restricted groups. The key thing was how peers within the scientific community received someone’s work. Why did science take this turn toward public recognition?
Well, I think it was a response to wider public interest. The early people who took an interest in science were limited to a small elite, educationally and financially. Whereas in the nineteenth century, there were lots of local organizations for following science and a far wider interest in it. And that’s the interest the British Association for the Advancement of Science (BA) met through its meetings around the country. They had 3000 people on a beach listening to a lecture by Adam Sedgwick, the geologist, in 1837.
And there were bodies like that because science was becoming more advanced and more technical. So there was a motive, from the mid-nineteenth century onwards, to actually have organized teaching in those subjects. And as I said, science only became part of the Cambridge University curriculum for undergraduates in the mid-nineteenth century—mathematics had done so already. And of course, this led to career openings for people to teach at these places, and also to do research.

The Challenges of Scientific Authority

You’re an admirer of the physicist Joseph Rotblat. You even met him later in his life. He left the Manhattan Project on moral grounds, and later ran the Pugwash conferences to promote nuclear disarmament. He also got political operators like Kissinger on board with his projects.
Rotblat seems to have been quite successful in his initiatives. We could contrast him with Leo Szilard, another physicist who was working on the same issues but got sidelined by politicians. What made Rotblat so effective as a scientific advisor in this period?

For the first half of the century, Germany and Britain were the major creators of science, with most of the origins of quantum theory, Einstein, and all that coming from Europe. America only became a leading scientific power after World War II. And that’s partly stimulated by the war itself, of course.
In World War II, even more than in World War I, it was realized that science was crucial for weaponry. And of course, the most conspicuous development was the atomic bomb, which was an extraordinary collective achievement of science. And then, of course, we had the space program. Politicians were aware of the dependence of our civilization on having good scientists around. And especially after World War II, there was great optimism about science, because science had clearly been crucial in the war’s outcome.
I think one shouldn’t overemphasize the role of Rotblat and Pugwash, and all that. But the reason I wrote about them in my book was that the first group of scientists to confront the real ethical dilemmas of science in a big way were, of course, the nuclear physicists who built the bomb. And I got to know several of them in their later years, in the 1970s, including people like Hans Bethe. Many of them were exceptional scientists, but also people of some ethical sensibility.
And they’re the people who went back to civilian life. But they had an obligation to try and optimally harness the forces they’d helped unleash during the war. And the Pugwash conference group founded by Rotblat and Bertrand Russell was an example of this.
The other feature of science, of course, is that it’s always been international. Mendeleev had the periodic table in the late nineteenth century, and there was lots of contact with the Germans in chemistry. So even when the world was divided during the Cold War, scientists wanted to keep in contact. And that was the motive for channels like the Pugwash conferences, where scientists from both sides who trusted and respected each other could get together. And bodies like that were, in fact, especially important when there were few other channels between the two sides. I think they were less important from the 1970s on, because there were a lot more channels after that.
The nuclear issue was not the only important one. But I think the Pugwash movement was important in getting scientists on both sides together and offering a back channel to governments rather than direct channels. And from what I’ve read about the history, which was before my time, they had a role in easing the path to the Anti-Ballistic Missile Treaty and all that.
What was the value of Pugwash to the political side? It doesn’t seem like they only found them to be useful messengers. Rotblat and his allies actually got important people on board with trying to curtail the nuclear threat. Why was the group so effective at that?
Well, they got politicians involved after the politicians’ retirement. There was this group, the Gang of Four, that was trying to move towards zero nuclear weapons. That was Henry Kissinger, George Shultz, William Perry, and Sam Nunn. And there were other groups like that. And of course, Robert McNamara, who was a great hawk at the time of Vietnam, spoke in his later years about the excessive risks that were taken during the Cuban Missile Crisis in 1962. And he said that the U.S. was lucky as well as wise that it didn’t lead to a nuclear confrontation.
So many of these people who’d been at the center of these decisions realized in their later years just how lucky they’d been, and how great the danger that they managed to avoid was. And therefore, they were prepared to engage with these groups at that time.
It’s hard to strike the balance in a situation where the science is uncertain but the state has to take action. You mentioned that when there is scientific controversy in a political question, it may be better for a scientist not to invoke their scientific authority. They should engage that point in public as a citizen. What is the difference between engaging as a citizen versus as a scientist?
Geoengineering is an interesting example. That’s an issue where everyone accepts it’s a sort of Plan B if climate change does turn out to be drastic and we don’t cut CO2 emissions. In fact, there was a campaign in Canada that was against any experiments that were at all relevant to geoengineering, whereas other people said, “well, let’s at least do some experiments so that we know it would work if we need it.” So this is a genuine controversial debate, and I think it’s quite right that it should be.
One point which I make is that many of these decisions which benefit future generations are, of course, doing that at the expense of money we could spend straightaway. So there’s a tension between instant gratification and doing something which will benefit future generations. Of course, the issue here is that politicians have a focus on doing what’s right for their own constituents before the next election. And so they’re not the kind of people who will think very long-term unless their voters are happy with it. That’s why in my book, I quoted the senior European politician Jean-Claude Juncker: “We know what to do, but we don’t know how to get re-elected when we’ve done it.” And he was thinking of the measures you need to stem serious climate change in the far future.
The other point is that because politicians normally have a difficult agenda of short-term urgent problems—especially at the moment—they won’t take very much notice of the scientific adviser who tells them to worry about these long-term questions. They’ve got urgent things on their mind. And that’s just human psychology. That’s why one thing I noted in my book as being very important to change is public opinion—the opinion of voters. Because if the voters can be made to think long-term and to care about the world their grandchildren will live in, then the politicians will respond. And that’s why I say that we should appreciate all the demonstrations that are happening now in favor of action to prevent climate change.
I admire what I call the Disparate Quartet of charismatic individuals: Pope Francis, David Attenborough, Bill Gates, and Greta Thunberg—all very different from each other, but they’ve all done a great deal to make the public more aware of climate change. And they’ve affected the willingness to have legislation that favors clean energy. This, I think, is important because they change the opinion of voters, and the voters will then be happy. And then the politicians may make these long-term decisions. So that’s an example where public opinion is important.
Another example is the pollution of the oceans. When there was a bill passed in the UK to prohibit non-reusable plastic drinking straws and the like, that was because of David Attenborough’s programs that showed people the stomachs of fish full of bits of plastic. There’s also a scene [reported by Attenborough] of an albatross who returns to his nest and coughs up lumps of plastic for its brood. And that is an iconic picture, that millions in the UK saw. It became rather like the polar bear on a melting iceberg, you know. And that made a difference. The British politician certainly wouldn’t have used any of his political capital on a regulation like this, had he not realized that lots of people now cared about ocean pollution.
And so I think the lesson there is that if the public cares about something which is long-term, then politicians will respond. That’s why it’s very important to make the public aware. It is also important that scientists should get through to the public. But in many cases—except people like Carl Sagan, who were masters of this—this is best done through intermediaries, like the four charismatic people I mentioned, who aren’t themselves scientists but people who listen to the science.
During the COVID-19 response, we saw how politics engulfed everyone—even those trying to do objective work. There was this battening of the hatches on the part of many scientists. One quote that made some waves was Dr. Fauci saying that “attacks on me are attacks on science.” One side saw scientific authority under assault, the other as cover for a political agenda.
In retrospect, do you think that battening of the hatches was necessary? Or was it a bad move?

Well, let me say first, I think Fauci’s job when Trump was president must have been near impossible. I mean, the hard thing to get through to the public is that, very often, we just don’t know what the right thing is. And the question is, to what extent should we be precautionary in these measures one recommends? And the scientists have to illustrate that to the politicians and leave it to the politicians to decide.
There was a case in the 1980s—I think I mentioned it in my book—about Mad Cow Disease, which was a new kind of prion disease that wasn’t understood at all. And it killed up to 100 people. The government took rather excessive precautions against it, like banning beef on the bone. But the reason that was not irrational was that the Science Advisor, and I knew him at the time, said “Well, we are going to have 100 deaths.” But if the politicians asked whether he could say the chance of a million deaths was less than one percent, he had to honestly answer no. And of course, a one percent chance of a million deaths is more worrying than a definite chance of 100 deaths. And so, given the ignorance at that time, it was right to be overcautious.
And so there are lots of cases like that, when you’ve got to do what is in retrospect an overreaction by preparing against the worst case. So that’s just an example. So I just think that scientists should do their best. And then the politicians have to decide on this, on regulations which may prove to be overly stringent.
But if you pay your fire insurance on your house, and the house doesn’t burn down, you don’t think it was a waste of money. You just accept that as a possibility. I think the public has to understand that there are cases that are analogous to that.
The nineteenth century kicked off the period when scientists wanted to engage the public. I wonder if we’re seeing the end of that cycle, partially as a result of these current conflicts. I do hear people say things like if people don’t have formal training, then they’re not going to understand how the precautionary principle works, or how statistics work.
So, the claim goes, maybe it’s better that these things are kept among qualified experts. Those who aren’t qualified should not comment. Is that a change you’ve seen at all?

Well, I would have thought that the fraction of the public that has got some basic feel for science is still too low. But it’s going up, not down. I mean, there is noise caused by fake news. But I would say that with more people going to university and taking at least some science classes, the number of people who know a bit of science is going up. It’s still too low. And I have to say it’s worse in America; only recently has a majority of Americans come to accept evolution. And that’s far worse than in Europe. But I think, in general, the number is going up.
The other thing is to appreciate and not be bamboozled by statistics. One obvious case is if you have a test for some rare disease, then the false positives may outnumber the actual cases. And that’s a well-known phenomenon, but one has to try and get that over to the public. And it’s not trivial, but it’s not too difficult. So that’s an example of something where one wants the public to actually understand what the uncertainties are.
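[As a minimal sketch of that base-rate effect, with numbers invented purely for illustration (the prevalence, sensitivity, and specificity below are hypothetical, not taken from any real test):

```python
# Hypothetical numbers: a rare disease and a seemingly accurate test.
prevalence = 0.001     # 1 in 1,000 people actually have the disease
sensitivity = 0.99     # the test flags 99% of true cases
specificity = 0.95     # the test correctly clears 95% of healthy people

population = 100_000
true_cases = population * prevalence                               # 100 people
true_positives = true_cases * sensitivity                          # ~99 correct alarms
false_positives = (population - true_cases) * (1 - specificity)    # ~4,995 false alarms

# Positive predictive value: how likely a positive result is to be a real case.
ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"P(disease | positive test) = {ppv:.1%}")   # roughly 2%
```

Even with a test that sounds accurate, most positive results here are false alarms, simply because healthy people vastly outnumber the sick.]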
You’re optimistic, it sounds like.
I think things are improving. And I say in my book that scientists shouldn’t moan too much about ignorance on the part of the public, because it’s gratifying how many people are interested in things like space and astronomy, and all that. Kids love dinosaurs. So, the wonder of science is a separate thing. I think we should be grateful for that.
And I think we should equally bemoan the public’s ignorance of other things. I think it’s just as bad if only a small fraction of the public can identify Ukraine or South Korea on a map, or knows the basic history of their country. The results of a poll on those would, I suspect, be at least as depressing as the polls on scientific knowledge. If you’re going to be an informed citizen, I think you ought to know all those things to have an informed judgment.
I think the number of political decisions that have a scientific dimension is probably going up. And so the importance of scientific knowledge, relative to geographical knowledge, is probably going up a bit as well. But the public—in a proper democracy where it can actually make informed decisions—needs at least some basic understanding of these things, a background in history and geography, et cetera. I don’t think it’s too much of an aspiration to hope for that. Lifelong learning on the web, and all that, can help with this sort of thing.

The Future of Research

In your book, you point out that published papers today are often judged by what journal they appeared in, rather than their own merit. You also say that the Nobel prizes have an effect where, again, scientific fields are seen as valuable because they’re represented in the Nobel Prize system itself.
It seems like what you have in both cases is a compounding distortion effect on attention. It becomes harder and harder for an outsider not just to judge actual research, but to even pay attention to the correct things. How serious of a problem is this?

Well, I’ve been in academia for 40 years now. So I think I realized a sort of randomness in the way these awards go, and the fact that they are in a certain set of fields. They don’t reflect how science is done, because so much is a team effort rather than an individual one. And even when work is done by an individual, it may be someone who is lucky, rather than especially brilliant. So for all those reasons, it’s a mistake to elevate people winning Nobel Prizes as being the great leaders and the great intellects of science, because that’s just not true. And so, what’s rather good is that there’s now a greater variety of awards and ways of recognizing scientists, and in some cases, of recognizing groups.
Going over to journals, this is an aspect of the way that I think that academia is getting a bit sclerotic in the way it operates. It needs to open up a bit. For instance, research is important, and it’s done in universities alongside teaching. But the way research develops may be better with lots of blogs, exchanges, and things like that. And the traditional model—where the only thing that affects your promotion as an academic is publication in good journals—is, I think, a mistake. It’s unfair because many people can make a bigger contribution by outreach, or by having a good blog, and things of that kind, and we ought to recognize that.
That’ll make academic careers more attractive. Because, being in this world myself, I do worry that becoming an academic is a less attractive career path now than it was when I was young. That’s because there’s more bureaucracy and more audit culture, and also other forces, demographic ones among them. Promotion is slower.
An American example of this is the NIH, where the average age at which you get your first grant as an investigator is now 43, or something like that. Shirley Tilghman, the former president of Princeton, was on the committee that found this. And she was worried! She contrasted this with when she was young—she’s about my age—when she got a PhD, then did a postdoc, and then got a grant and opened up her own lab. People can’t do that now. At that time—we’re talking about the late 1960s—the young outnumbered the old, because there’d been an expansion of higher education. And people also retired; they didn’t stay on after retirement.
This is worrying, really, because I feel that the people who are going to be most deterred by this difficulty of getting fast promotions are just the ones you want to keep in: the people who are enterprising, flexible, and would like to achieve something distinctive in their thirties. It was possible in the past, but is less so now, and so my worry is that they will go into something else. Well, we want some of them to go to start-ups and into other professions, but we want some of them to go into academia. Academia can’t depend just on the nerdish elements and people who are happy to spend most of their lives writing grant applications.
So I think the health of academia, and of academic culture and research culture, is a bit under threat for these reasons I just mentioned. So there’s got to be some ways of recognizing achievements, other than through those journals. And also a way of perhaps encouraging more independent scientists.
The thing is, people have been studying these problems for a long time. Everyone’s very aware of things like the replication crisis, the grant problem, the problems of peer review, and so on.
It seems like the institutions have a very hard time adapting even to what we know, or to quite well-established criticisms of the way they work. So how could such a thing actually happen? How could you actually change the momentum?

Many things that I grumble about most are problems created by academics themselves. So they need to change. There’s some hope. In fact, I’m on a committee on research that was set up by the American National Academies. And I’m one of two Brits on this, as a former member of a National Academy. They’re trying to address this sort of thing. And I think collectively, they could have some influence on academia. So I do have some hope of changing these criteria.
But of course, there is a problem. It’s very easy for people who don’t have real credentials to get a lot of attention via social media, right? There’s got to be trade-offs to make sure that people who do solid work without much publicity get recognition, but at the same time avoid too much bureaucracy. And one related issue is whether pure research, which is clearly important as the basis for future technologies, is still best done in universities. Or should we shift more away from that model—which we in Britain and you in America still have—to a system where there are more standalone institutions?
The disadvantage is, then you don’t get a direct link between the professors and the students. But you would give people the opportunity for full-time, long-term research projects, which they can’t do now because there are more distractions and more administration than there used to be in academia. And most grants come with periodic reviews: you’ve got to do something good every two or three years, you know, and can’t do long-term stuff.
So I think the balance is shifting a bit in favor of long-term institutes, separate from universities, especially in some health topics and in technology—for clean energy and things of that kind, where the social need is a mixture of pure and applied science, and for bridging the gap between the university and commercial work. I think there’s a case for more standalone institutes with specially directed targets.
Do you think that the research paper is still a useful unit of research?
Well, I think it’s one option. But you know, someone could do a successful blog. And, of course, someone might do a long-term project and write a book. So I think, the idea that you’ve got to write a certain number of papers in a three-year period, that’s obviously a constraint on your choice of topic. You’ve got to choose a bite-size topic where you know you’d have good results within three years. And that may militate against working on really important long-term projects.
So I mean, there’s virtue in the research journal and research paper. And in topics like philosophy, it probably has a bigger role. Although even there, I think it is a mistake to sort of focus too much on that compared to blogs and papers in the wider literature.
Could finding more ways to fund independent research be a good idea? What strategies should people have on that front?
Those who currently aspire to academic careers face a nastily competitive and insecure environment, bedeviled by audit culture, where the requirement to meet short-term targets impedes a focus on long-term risky projects. My earlier generation was far luckier. Academia needs at least some of the people with ambition and flexible talent who hope to achieve something by their thirties.
It’s good, of course, if some of these people create start-ups. Better still if, having made money by their forties, they become “independent scientists” in the mold of the independently-wealthy Darwin and Rayleigh in the nineteenth century, and Edwin Land and James Lovelock in the twentieth. Indeed, we need more such people in order to avoid groupthink.
A related issue is how fastidious we—or universities—should be in accepting donations. This is an issue in the UK. It’s not just cases like the Sacklers [a U.S.-based pharmaceutical dynasty often criticized for their role in the opioid epidemic]. Donations from leading fossil-fuel companies are being declined, as are those from defense contractors and from countries like Saudi Arabia. But I think there are inconsistencies. For instance, those who make billions from “crypto” are surely socially damaging—the typical non-savvy “investors” lose just as those who engage in online betting do. But these “super-rich” are accepted as prime donors to ethically-sensitive organizations like the California-based “Effective Altruism” group.

Lessons From Public Service

From 2005 to 2010, you were president of the Royal Society. You’ve mentioned that, in retrospect, you should have been more “activist” as a president. What would you have done differently?
The Royal Society has a very broad remit: science itself, advice on government policies, and a strong international dimension. In 2005, I was elected to a five-year term as the Society’s President. It’s an honorary post, and therefore can only be part-time for anyone who is neither retired nor independently wealthy. But there were many activities—fundraising, engagement with Fellows, “representational” events, attendance at inter-academy meetings overseas, and so forth—where I felt the Society would have benefited from a full-time President.
I believe that all scientists should, as individuals, be politically engaged. When their own work is concerned they have a special obligation to foster its benign applications and to warn against its downsides.
It’s less clear how academies and learned societies should become advocacy groups for specific policies. Clearly, they should offer assessments of scientific issues and policy options; they should make recommendations within their range of expertise; they should offer views on the curriculum of schools and colleges. And the need and scope of regulations on dangerous pathogens, climate, and so on can best be addressed by inter-academy dialogue.
But academies should not adopt any collective stance that’s too controversial, either through being overtly party-political, or being ethically dubious in many people’s minds. For instance, should academies advocate the building of nuclear power stations? This is an issue where opinion in many countries is roughly equally split, both among people with genuine expertise, and among those with none. My line was that the Royal Society should not take a collective view on this, though I expressed my personal view in favor of R&D into improved fourth-generation nuclear reactors. And of course, there are other issues where there’s an ethical divide. For instance, the deployment of genetic techniques for human enhancement.
What about issues when there is a strong consensus among experts but some “dissidents” exist? This sharpened up for me in the context of the climate debate.
Our policy was that a collective Royal Society statement required endorsement by the Society’s Council, which includes the officers plus eighteen elected members. There are around 1000 UK-based members altogether, among whom there will obviously be proponents of “dissident” viewpoints, but these cannot expect to “veto” a statement. On this basis, the Society endorsed the UK’s Climate Change Act, which enshrined the goal of major cuts to CO2 emissions, despite opposition from some “climate deniers” among our members.
Apart from climate policy, another issue that aroused controversy, though fortunately one peripheral to the Society’s main agenda, stemmed from a vocal faction of “New Atheists”—best described, I think, as small-time Bertrand Russells. There was little in their views that he hadn’t expressed more eloquently decades earlier. My line was that the Society should be a secular organization but need not be anti-religious. Of course, we should oppose, as Darwin did, views manifestly in conflict with the evidence, such as creationism. But we should strive for peaceful coexistence with mainstream religions, which number many excellent scientists among their adherents.
This tolerant view would probably have resonated with Darwin himself, who wrote: “The whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope and believe as he can.” If teachers tell young people that they can’t have both God and Darwinism, many will choose to stick with their religion and be lost to science. My own perspective is that if we learn anything from science, it is that even something as basic as an atom is quite hard to understand.
This should induce skepticism about any claim to have achieved more than a very incomplete and metaphorical insight into any profound aspect of our existence. But this need not prevent us from appreciating the cultural traditions, rituals, and aesthetic accretions of religion, and its emphasis on common humanity in a world where so much divides us.
You also sit in the UK’s House of Lords as a crossbencher. What has your experience been like in this role? Where did it prove an asset, and where was it a hindrance?
In 2005, I became a member of the House of Lords in the category of “people’s peers.” This process involved being nominated, and then, if shortlisted, being interviewed by a panel. It’s important that peers in this category should be “crossbenchers,” with no party affiliation. And, because they don’t take a party whip, they can vote as and when they wish, without obligation to respond to a party’s call. I was an opponent of Brexit, for example—and feel sadly vindicated when I see the mess we’re now in.
Most new peers enter via a different route: nomination by the Prime Minister or party leaders. Numerous such appointments in recent years have been criticized as rewards for donors, or cronyism: there’s a widespread view that reforms are needed. But membership remains a privilege, even though perhaps less of an honor.
I speak in the Lords on educational and scientific issues, and on some topics I care about—legalization of assisted dying, for instance. But I honestly haven’t been very active except on the select committees and “special inquiries.” I’ve tried to use the Lords, along with other commitments, to raise awareness of a topic that I’d devoted much of the last few years to campaigning for: we need to prioritize the prevention of catastrophic risks. These include not just slowly emergent catastrophes like global warming, but those stemming from misuse—by error or by design—of ever-more powerful cyber, AI, and biotechnologies. I also helped set up the Centre for the Study of Existential Risk (CSER) in Cambridge.
Are there fields of knowledge today that you feel are not getting sufficient attention, whether from scientists or the public?
As science advances, its frontier with the unknown becomes more extensive. Among the most exciting areas on the current frontier are synthetic biology and robotics. But I think we need to keep a focus on the challenge of properly nourishing the 9 billion people who will be on Earth in 2050. Doing this without encroaching on or despoiling natural habitats will require novel technology: genetically modified crops, artificial meat, and so forth.
But let me put in a plug for astronomy, the grandest of the environmental sciences. Thanks to improved instruments on the ground and in space, this subject is becoming broader. For instance, we know that most stars are orbited by retinues of planets. There are billions of planets in the Milky Way that could be abodes of life—but are they?
Astronomy is also a “fundamental” science: understanding the very beginning of our expanding universe will require advances in physics that may take us further from our intuitive concepts than quantum theory and relativity already do. We must be open to the possibility that these concepts are too deep for human brains to grasp—at any rate, no more graspable by us than quantum theory is by a monkey.
Ash Milton is the Managing Editor of Palladium Magazine.
 
Fuck if I remember most of them. I vaguely recall us reading these kid detective books where the boy hears about or witnesses a crime and then solves it, and we're supposed to figure out from the story how he solved it before turning the page to the explanation. Like in one, his dad is a cop trying to find people who robbed a bank, and they come across a hitchhiker who's been in the heat all day and offers to give them directions, and the boy realizes that the hitchhiker is one of them because the hitchhiker had chocolate and broke off a piece to share.
Encyclopedia Brown?

The collapse of fields like nuclear engineering and ATC will be great fun, but the crisis is not limited to the exciting, high-IQ sectors. Regular wageslaves in offices see at least an order of magnitude more email traffic about diversity, equity, inclusion, and allyship than they do about training or best practices. Attempts to identify flaws or shortcomings in a system and correct them are rejected and mark one as a troublemaker, while participation in DEIA may be richly rewarded. (Fixing a flawed process that costs time and money or produces inferior results is too difficult, shut up and get back to work; but human nature is deeply flawed and we can fix it if we DEIA hard enough.) High-performing young men won't find any refuge in less prestigious middle class occupations.
 
Encyclopedia Brown?

Encyclopedia Brown sounds right.

I do think there will be a point where we will have to re-test every scientific theory, and possibly start from scratch, because there will have been so many false results put in to make specific scientists look good. I'm confident we will have to throw away almost everything we have about psychology and treatment. For starters, if you look at dogs their behavior and temperament is largely breed specific. What if what we consider mild neurodiversity is simply genetic predisposition? Like French will be more artistic, Germans more autistic, italians more social, English being slow, jews being sociopathic, so on and what we think is someone being strange is just them being what's normal for their heritage? Or at least to an extent?
 
I've been mulling this article over for the last few days.

@Otterly Like Iain M. Banks, there's another UK author, Charles Stross, who does James Bond meets Lovecraft with a twist of British bureaucracy.

He often points out in his book that good management is like air: you only notice it if it's gone. Same with the idea of institutional knowledge, having an organization that isn't dependent on a handful of people who know how shit works, hence the creation of the tyranny of the bureaucracy. Things should never have gotten to the point that the nuclear plant had a "Dave".

One part of the article that struck me was that issue with organizations outsourcing credentialing to a handful of schools, and those schools then outsourcing their admissions to standardized tests.

The article talks about how diversity is corrupting the system, which I think sadly pulls from the above issue. Why don't organizations just recruit and train directly? What does Harvard do that a big company can't?

I grew up in what people would call an underserved community. The schools are bad, and produce bad students, who get shuffled off to a junior college that is also bad, which then shuffles people off to a university that 20 years ago was called a resume stain.

So I have direct experience of not being taught and just getting moved right along. And to take people from that level and have them go up against people who came from better-funded, better-supported communities, yeah, it's like clubbing a baby seal.

While diversity is a code word for non-white, remember that Asians, at just 6% of the US population, make up a larger share of educated workers, because their culture and community value education.

and before we start screaming nigger nigger, their are lazy white people who grow up in trailer parks that love to be ignorant just as much as niggers.

I personally would love to see a real change so that if you can't read at a certain level, you get flunked out of junior high and high school.

College is the worst, because by creating risk-free loans the system creates infinite demand for worthless degrees, and a college won't expel students because then it loses money. The incentives are all wrong.

I have a cousin who got a english degree and for 3 years hes work as a sub for the local school district that runs a program where the district will pay you to get your credential he's sat on the zoom interviews and see how the recruiters behave toward colored women who basically say "in 3 years I dunno I want to be a mom" and shit like that. I m sure this program is run by grants and his being a white male is mark against him.

So the school is losing a rather motivated teacher because it has to make up for some larger inequality.

15 years ago I knew a guy who tried to get the county to pay for him to learn welding and was basically told that illegals who don't know English will always jump the line ahead of him for the program.

But to loop it back to meritocracy: I can see how taking people who come from communities that don't value education and lack resources, and telling them to go compete with kids whose families from day one had the resources to invest back into their kids, is not really a meritocracy. But at the same time, the changes to the system are not addressing those issues.
 
You hit the nail on the head several times there, but this is the real crux of where people are being let down. I grew up in a very poor northern town where the norm was not university and the schools were crap. I said I wanted to go to uni and was brushed off and mocked. But I went, and when I got there I was faced with a whole cohort of peers who had been expected to go regardless of being smart or not. It was the first culture shock of several. All these kids had had tutors. None had to do homework perched on the stairs because they shared a room with multiple siblings. They were tutored and cosseted, and they'd breezed into this fancy uni on who their course tutors had been and knowing exactly what to say. Shortly after this there was the big push to get more state kids into uni, and a lot of that stopped.
I found myself almost resentful of these kids who were mediocre but had got into the best course in the UK for what we were doing because of public schools. And before I'm accused of making that up by anyone, the tutor for the course, a very posh type, told me my admission had been an error because our school, despite being Crapchester High, had a fancy Latin name that was almost identical to a fancy private place. Unprofessional thing to say, no? I think he was trying to be nice in an odd way. Like 'oh we shouldn't even have had you, but you're ok.'
This is the point where poorer communities are failed. They don't have a background of education, and most don't value education (luckily my parents did, even though they hadn't gone themselves; they were of the generation who grew up post-war and left school early, and their parents were dirt poor).
If you want more diversity of social class - and that is the real UK issue, not race; it's CLASS - you need programs that improve access by getting kids up to the standard to apply right at the start. Not by letting unsuitable people into the system. By early intervention and better schools and gradually teaching the value of education.
In the USA and in the UK you've got communities that don't care about education, homes where no one reads, homes where kids have nowhere safe to do homework even if they wanted to. Then even if they want to learn, their schools are dumping grounds and containment pens. Fix that. All those articles about these horrendous inner-city schools always seem to have one speccy kid who really wants to get out but has no hope due to the school and home being a mess. Those kids can be reached and helped.
All that takes time and money and grit, hiring a tranny just needs a box ticked. But just picking out people who can’t do rhe job but fulfill a diversity box is a disaster. In any complex role it’s a disaster and in any role woth real impact on lives or safety it’s a life threatening disaster.
 
Assuming I'm the CEO of a Fortune 500 company, what am I to do? How do I hire based on competency without being sued for discrimination?
1. Online skill tests. Never mind the 100-question personality bullshit, but pull a high school English test from 50 years ago (yes, I knew someone who did this) and you'll quickly find out who is properly literate and who got social-promoted out of 12th grade.

2. Check the specific degree and field of study when you call to verify college transcripts. If they have a Bachelor's but neglected to mention it was in gender studies instead of something relevant to the job, that's a legit reason to deny them.

3. Examine social media profiles of candidates. Use Tor and a VPN so that a discrimination lawsuit can't instantly nail that HR was checking them out.

4. Search email addys and usernames to see if they're a member of a publicly accessible fag/libard/troon/communist/etc. group. Examples: Reddit subs, Discord channels.

Fuck if I remember most of them. I vaguely recall us reading these kid detective books where the boy hears about or witnesses a crime and then solves it, and we're supposed to figure out from the story how he solved it before turning the page to the explanation

Yep, that's Encyclopedia Brown. They were mildly famous back in the day for the author's zany puns at his adversaries. I won a kiddie contest for my imitation: "They should have called themselves the Gas Station Toilets - they never worked and were always full of crap!"

 
The article talks about how diversity is corrupting the system, which I think sadly pulls from the above issue. Why don't organizations just recruit and train directly? What does Harvard do that a big company can't?
Outside of things like medicine and science (and by science I mean chemistry and other related things), damn near every job was OJT (on-the-job training). One of the problems is the commodification of college. In my parents' day, you could still graduate from high school, walk into a place with a help wanted sign, and be told "Come back tomorrow." If you're lucky or persistent, you can also do it at a good company, then spend the rest of your working life there and even do well enough to raise a family; and they did. My dad didn't even graduate high school, but the job he got was union, 401k, and other stuff, while their only requirement was "not be retarded." But even then, they had the "College gets you the big bucks" speech.

But what has college become? A dumping ground for what should mostly be taught in high school, or a substitute/fill-in/grift for other jobs that have certifications. My own field: I do IT. Now, IT is very vague, as it can mean a number of things; but mainly, we make sure computer shit operates properly. It's also a meme/dumping-ground field, because someone thinks they like computers or videogames and why not. But then people start looking and can't figure it out: do they go to school and get the degree, or do they get certifications?

As someone who's done both: certifications, as they're specific to job use, cost a lot less, have a far smaller time requirement, and can get you in the door faster. A degree only matters if you seriously want to get into management (other than that, it's resume padding); if you enjoy clacking on a keyboard, pulling wires, and thinking of a hundred different ways to tell someone they're fucking retarded (in your head, of course), then you don't need a degree. In fact, many places won't even require certs at the start; you'll get a grace period of like 3-6 months to get them. But colleges are still there, and still charge an arm and a leg, for what should only be a couple months of routine study and a couple hundred bucks for the cost to take the test.

College has become commodified and is backed by federal tax dollars, and has thus expanded its programs for "training." And the companies then decided to go with that instead of teaching people themselves... and I'm not a commie, but don't get me started on unpaid internships with a possibility of getting in the door.

And I'm not entirely against college, just mostly what it's become: some quasi-monolith that bestows knowledge and thus superiority, because you are a learned individual... so says this piece of paper. They have their place, namely for big-brain things, medicine, hard science, and the like. Everything else they could have a place for too, like certificate programs for fields, or even being a more technical place with gear and labs and shit for you to get hands-on with (because some people are kinetic learners, and no matter how much they read or listen or recite, they need to have their hands do it).
 
This has been a major issue for the last few years in a lot of technical industry: white guys are retiring and backfilled with diverse candidates who either can’t or won’t learn the job. The white guys then come back as consultants who make $400-500 an hour to unfuck what their vibrant successors caused but this will only last a few years as they decide they want to truly retire and won’t stay on for any dollar amount. Human Resources and a large chunk of management refuse to see this as a problem because they don’t believe a fecking huwhite male could actually add value to an organization. The last cohort of white male technical and engineering leads will be retiring within the next couple decades. Things are going to get really interesting in 15-20 years. Shit like flipping on a light switch and expecting to see a light turn on or expecting clean water from a faucet is no longer going to be considered a given in the first world.
 
While I agree with (and have witnessed firsthand) everything in the article, the title led me to think this was going to be about something else that is also responsible for the declining quality of life in America. Our structures and equipment are aging and failing. This includes everything from literal infrastructure (the author speaks of things like the degrading electrical systems in California and Texas) to the dementia-ridden national government.

The author almost touches on this, but not quite in the way I was hoping to see. The generation that built these structures is retiring and passing away. As we lose firsthand experience, we lose a massive pool of information. There is a difference between recitation of a textbook and being able to come up with something on your own (or from collaboration). I once worked on very specialized equipment where a lot of its maintenance and faults were passed on through oral knowledge. Its developers passed away long ago, and an instructor I was taught by (who was taught by one of those original developers of that system) probably retired in the last few years. Third-generation knowledge is where expertise becomes recitation.

I attended a lecture recently from someone who was trying to rewrite a major document (a constitution?) for his African country's government. He placidly read off his typo-ridden PowerPoint, placing the blame for this failed nation on racial inequalities. Blah blah blah. Eventually, one of the other African attendees stood up to speak during the Q&A and had the balls to say, "You speak of race, but forty percent of the nation lacks electricity. Surely, this is a contributing factor to these conflicts." I don't remember the response, but he was the only one to ask anything related to reality instead of pondering racial inequalities.

I am also currently in a course that has me reading a lot of essays written by my adult peers. The poor quality is mind-blowing. I probably don't need to go into this any further. I'm pretty sure this school is one that no longer uses any sort of SAT score for admissions, and it shows. Most of these students have probably never picked up a book of their own free will.
 
You've completely fucked over the practical application group (which to add, most savants belong to), and 90% of your remaining candidates suck in actual application but are still considered the only valid option because they're good in the theoretical part, and nobody will know they suck in application because academia doesn't test for that.
I would like to posit that the practical application group is also probably the group most likely to figure out how the system has been set up to be specifically hostile to them, and the ones most likely to say "Fuck all of this shit" and go find a wagie-tier part-time job that will pay the bills and play Videh games with the rest of their time.
 
You can extend this to most modern jobs. The modern era's fetish for overpriced and useless education has completely murdered the job market for everyone. Actual competency and experience matter less than whatever useless piece of paper you went into debt for. Everyone knows we're lacking in tradespeople, but nothing real has been done about it. Society looks at the people who maintain it the most like they're worthless failures. Is it any surprise it's all going to shit?
One of the big factors behind this is the increased cost of firing workers. If I run a business and can fire anyone for any reason, then I'm perfectly willing to hire literally a random hobo off the street, because if it doesn't work out? I can just get rid of them. The instant you add extra hoops and measures to make it harder for me to fire someone is the instant I start adding extra hoops and measures to my hiring process. If I have to go through a 1200-step process to fire someone, then I'm going to be real careful about who I hire in the first place and make it a 1200-step process.

The author almost touches on this, but not quite in the way I was hoping to see. The generation that built these structures is retiring and passing away. As we lose firsthand experience, we lose a massive pool of information. There is a difference between recitation of a textbook and being able to come up with something on your own (or from collaboration). I once worked on very specialized equipment where a lot of its maintenance and faults were passed on through oral knowledge. Its developers passed away long ago, and an instructor I was taught by (who was taught by one of those original developers of that system) probably retired in the last few years. Third-generation knowledge is where expertise becomes recitation.
Quite true. And it doesn't help that the culture and Marxism actively encourage dissociation from the older generation. "Oh what do they know? They're all just hateful bigots and racists anyway. We'll all be better off once they're all dead."

It certainly bugs me that pop culture really hammers home this idea that if you're bad in one respect, you're bad in every respect. So if you're a hateful bigot, there's no way you could possibly know about anything else that's going on. In conversations I've watched people's brains go BSOD trying to get them to comprehend that maybe - just maybe - even the worst person has some tidbit of knowledge you're going to need before they're gotten rid of.
 
What if we had an education system where we taught ourselves? Like we had the coursework and guidance, but it's expected that the older students would teach the younger students, and collaborate between themselves to learn their coursework, with the teacher explaining and clarifying more complicated things and helping when they get stuck?
 
Quite true. And it doesn't help that the culture and Marxism actively encourage dissociation from the older generation. "Oh what do they know? They're all just hateful bigots and racists anyway. We'll all be better off once they're all dead."
That's why I hate the "boomer" meme despite not being a boomer. Not only is the meme factually wrong, as the people responsible for ruining the West like Soros are too old to be boomers, but woke millennials are worse than the most head-in-the-clouds boomer. Remember, Nixon won the "Silent Majority" of boomers who were fed up with the hippy crap of the minority (and then the Deep State ousted Nixon, but that's a whole other story). Kent State was applauded at the time because finally, for once, the National Guard had put antifa in their place. The average boomer did not vote to outsource manufacturing to China or import half the third world; they were betrayed.

It's a pointless generational war designed to distract people from focusing on their real enemies, which span across all demographic groups. Commies love to say that wokeism is a distraction from the class war, but the class war is also a distraction. There are plenty of horrible poor and middle class people and plenty of good rich people who made their money honestly; they're just not famous for obvious reasons. Your average DC bureaucrat is a far greater threat to society than a billionaire like this guy who made his fortune manufacturing plastic films and is busy spending it buying every super car ever made.
 
I would like to posit that the practical application group is also probably the group most likely to figure out how the system has been set up to be specifically hostile to them, and the ones most likely to say "Fuck all of this shit" and go find a wagie-tier part-time job that will pay the bills and play Videh games with the rest of their time.
This is hauntingly accurate, because I know multiple people this description fits like a glove. Scorned by the system to the point where they've checked out of society and just want to get by on escapism, not caring if the rest of it burns.
 