How Social Engineering Drives Technology
Technology doesn’t disrupt society. Society adopts technology through a process of social re-engineering. This can’t happen without functional institutions.
This article by Samo Burja was originally published at Palladium Magazine on May 28, 2020.
Common wisdom holds that technology disrupts society. That is, a technology is invented, and then a natural and inexorable process of spontaneous order changes society to use that technology. But the reality is that society is itself an engineered system that changes more by deliberate planning than the common wisdom is willing to admit. If anything, it is society that disrupts technology.
From the design bureau’s office politics, to the organization of industry, to consumer behavior, to national security, social technologies enable and regulate what technology we use and how we use it. Without socialization, most of us wouldn’t know how to use any particular technology, or even what it was made for. Technology only reproduces itself through instruction or imitation—and only when embedded in the larger social organism that puts it to use. Every device not only has a manual but a social context. It is then social rather than material facts that drive or hinder the development and adoption of technology. The technologies we integrate into society become the foundation on which future technologies are built. We accept or reject technology together as a society.
When we talk about technology, we are talking about mass use of smartphones, gigantic interstate highways, a laptop in every lap, and so on. We are not thinking about a lone tinkerer’s invention. The reason is that technology can’t be sustained by individual genius or fancy for long. The succession problem is an obstacle that snuffs out even the most brilliant spark. Archimedes’ elaborate weapons of war only vexed the Romans until his execution by a legionary. Many technologies are only feasible at scale.
Invention itself is rare, but more common than most assume. Many marvelous machines are built to satiate a craftsman’s curiosity, or to amuse and impress the wealthy. An 18th century automaton with beautiful penmanship is technically impressive but of little or no historical consequence. The self-driving cars of the 2010s may prove to be similar machines. They are novelties to show off the technical talent and capacity of particular laboratories rather than something on the cusp of practical use.
An invention does not achieve adoption because of its mere existence, but only when it has found a stable socioeconomic niche. This is the difference between an invention and a technology. The archetype of the blacksmith cannot be reduced to any mere individual, nor to a set of tools, but personifies an entire socioeconomic niche—one deeply entwined with our thought and life over millennia. These archetypes are even reflected in the myths of settled societies, instructing us how to think, how to live, and what dangers to avoid. When a technology is so deeply embedded in social practice, it can even survive the collapse of civilizations. The ancient Greeks may be long gone, but their tale of the divine blacksmith Hephaestus and the goddess Aphrodite still serves to warn us of the dangers of neglecting one's spouse for one's craft.
Since the industrial revolution, the more machines are used, the cheaper they become. This made it viable to build socioeconomic niches based on mass adoption. This adoption at scale is what gives rise to highly centralized halls of production. Factories are enabled by economies of scale and the efficiency of ever narrower specialization combined with the oversight of engineers to optimize entire assembly lines. The industrialist can glance through a single hall and see all stages of car production laid out in front of him. Ideas for how to improve the production process might be justified with lines in spreadsheets, but they originate in seeing and looking. Since the bottleneck on production of machines is almost never the resources used, scaling up the factory only improves these economies.
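The claim that machines get cheaper the more of them are made is often formalized as an experience curve (Wright's law), under which unit cost falls as a power law of cumulative production. A minimal sketch in Python, with illustrative parameters that are hypothetical rather than drawn from any real industry:

```python
def unit_cost(n, first_unit_cost=100.0, learning_exponent=0.32):
    """Wright's law: the cost of the n-th unit falls as a power law
    of cumulative production. An exponent of about 0.32 corresponds
    to a roughly 20% cost drop per doubling of output (log2(1/0.8)).
    Both parameters here are placeholders for illustration."""
    return first_unit_cost * n ** (-learning_exponent)

# Each doubling of cumulative output cuts unit cost by roughly 20%.
ratio = unit_cost(2) / unit_cost(1)  # ~0.80
print(unit_cost(1), unit_cost(2), unit_cost(1024))
```

The numbers are stand-ins; the point is only the structural claim in the text: scale itself drives costs down, which is what makes socioeconomic niches built on mass adoption viable.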
Getting people to work in these halls was an entirely separate challenge. The industrial revolution required a breakthrough in the ability to educate masses of people on how to use new machinery. This was achieved with a kind of military discipline, using methods learned from prisons, crowd control, and army management to convert recently urbanized farmers into obedient workers. These methods were much parodied and bemoaned in the early 20th century in silent films such as Metropolis or Chaplin’s Modern Times. The rapid spread of the state-mandated Prussian model of education as an aid in economic development owes as much to preparing workers for a lifetime of such discipline as it does to loftier goals like imparting literacy.
Eventually, most of society was subsumed into new industrial forms of life. But this new breed of worker isn’t merely a cog on the producing side of the equation. Factories are naturally consumers for other factories. But what about consumption outside of factories? Production at scale requires mass adoption. During war, military orders are the driving demand. Two or three million steel helmets and three to five thousand tanks can certainly sustain a whole ecosystem of socioeconomic niches. But no society can wage war forever, even if wartime is the origin of much of technology. Militaries lose the ability to tell good designs apart from bad due to institutional decay during long periods of peace, and without active demand, productive capacity decays from lack of use. A big picture perspective on national security recognizes and plans for this contingency, since war eventually always returns.
A peacetime alternative for sustaining a technology is to make it necessary for participation in society and in everyday life. Coincidentally, the mass education and training of the new industrial society created new consumers as well. Urban workers become not just a labor force for one factory but part of a growing market of consumers for many factories.
Why is a car part of normal life and behavior? To introduce the mass use of a car, you have to teach many drivers how to drive, but it is less obvious that you also have to teach these drivers where to drive and invent those places if they do not yet exist: commuting to work from the suburb. Holiday trips. Perhaps shopping. Maybe a trip to a fast food restaurant. Those last tasks are sometimes recognized as engineered desires, but they are also engineered social patterns. It’s one thing simply to build one car or a million cars—it’s another matter to make people want to learn to drive, to give them the roads and highways to drive on, and make the car the cornerstone of modern transportation.
These social patterns and institutions were designed, not discovered. Once designed, we were instructed how to participate in them, to support their economies of scale. The advertisement doesn’t merely tell us what to want; it shows us how and why to want it. One key result of this engineered social change is that useful industrial capacity is sustained and improved.
Changing consumer behavior at scale through mass marketing was perhaps one of the greatest breakthroughs in social engineering to match newly available technology. Familiar marketing techniques are recognizable in the work of American evangelists in the 18th and 19th centuries. Every few decades, traveling preachers and mass pamphleteering would change the demographic balance between denominations and the religious practices of an entire generation. Historians have termed such periods ‘Great Awakenings.’ Mormonism is a uniquely American faith whose rapid growth best shows how effective these techniques are. Marketing was further perfected and developed with the arrival of Viennese psychology to the U.S. in the 20th century and ultimately applied to political and economic behavior, as described well by Sigmund Freud’s nephew Edward Bernays in his 1928 book Propaganda. The same methods that portray cigarettes as torches of freedom can later make them harbingers of cancer.
The post-Communist Eastern Europe of my childhood in the early 1990s was a world where mobile phones were introduced at the same time as many people were purchasing their first dishwashers and microwaves. This consumer-accessible bounty in technical and electronic equipment convinced my 7-year-old self that immense technological progress was inevitable. The futurists I watched on the English-speaking Discovery channel agreed; they predicted that in the distant future of 2020, we would merge with AI, leaving our physical bodies behind.
We were all wrong. There might have been a rise in living standards, but it wasn’t evidence of how often new technology was invented or even how much technology was integrated into society. America and then Western Europe had learned how to integrate the microwave into society long ago.
America excels at early and widespread adoption of novelty. Marketing, both ideological and economic, remains one of America’s key strategic advantages. The strange focus of state-sponsored Depression-era propaganda on consumption becomes understandable in this light. As a consequence, the 20th century citizens of the U.S. didn’t embark on consumption as mere personal indulgence—but as a pro-social and patriotic duty, acting on the highest authority of the land. The purpose was to create a social order which sustained particular technologies and industries. In the aftermath of 9/11, President George W. Bush called on Americans to go shopping and travel to Disney World as an act of defiance against the terrorist goal of instilling fear. Without consumption, the American machine stops. Something else may take its place.
Not all of our advanced technologies can be stabilized into social niches through mass production and adoption. No technology originates in mass adoption itself. For something to be marketed by either Steve Jobs’ Apple or FDR’s National Recovery Administration, it first must be invented and developed. The acceptance or rejection of a technology isn’t just a matter of the whole society adopting it all at once. Rather, particular organizations first develop the technology and then undertake—much like Mormon missionaries—to alter society to accept it.
Surprisingly, even organizations dedicated to the creation of new technologies seem to become hostile to innovation over time. The underlying reason is that contrarian ideas—as all new technologies are by definition—almost never survive committees. How could they? By their very nature, they cannot have the majority on their side. If they do, it is because they have a powerful champion who has cornered the committee, an uncommon skill. This simple observation rules out the most frequently proposed reforms of philanthropy, academia, and government as ways to kickstart innovation. It opens up new ones, too.
Committees are commonly used in our society because they create the illusion of avoiding risk. They are a wonderful device for avoiding responsibility while making the institution seem more rather than less accountable. Modern institutions have overloaded on actual risk while fleeing the appearance of it, especially if you count “failing at core mission” as a risk. Such aversion to the appearance of the unusual can’t be justified on economic grounds. Rather, it is a socially driven aversion. There is no immediate reward for making a meeting awkward either in the boardroom or the engineering room. There’s not even a reward for making it surprising.
In a start-up, a difficult and aggressive personality might suffice at first. But larger corporate and government-directed technological efforts work best when they are clearly goal-oriented. Whether it was the Manhattan Project that produced the Trinity Test or the NASA that put a man on the Moon, there was a clear objective, backed by enough power to overcome social inertia. A functional institution is much easier to design if you have a yardstick with which to measure it and the political power to build it. For Oppenheimer’s bomb project, the yardstick was creating a usable weapon before the Germans and Japanese were defeated by conventional means. For Wernher von Braun’s NASA, it was landing a man on the Moon within a decade, and most importantly, before the Soviets.
Oppenheimer and Von Braun both received temporary grants of political mandate, revoked with more or less fanfare after their missions were accomplished. Their projects are excellent examples of how power centers benefit from lending power to achieve a technical objective. Their Cold War fruits are also perhaps examples of why we sometimes question the very legitimacy of technology. There is, however, no way out but through. Either social or material technology must devise solutions to such problems.
The Los Alamos and NASA of today aren’t focused on external goals, but rather on self-preservation and defending their budgets. Neither is very effective, and they mostly subsist on prestige earned long ago. Their social machinery has grown too unfocused to be able to build and maintain the technological breakthroughs they once could. Significant failures come as a surprise, purported to be completely unpredictable. As the Nobel Prize winner Richard Feynman noted in the Report of the Presidential Commission on the infamous Challenger accident that killed seven astronauts:
There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case… Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space… For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.
Feynman could only contrast the Challenger disaster with the functional Los Alamos he knew in his youth. This 1986 critique of NASA reads as only more damning as the agency approaches a full decade since its last manned flight on July 8, 2011, when it retired the 1970s-designed Space Shuttle.
In its place is the Demo-2 mission, which is scheduled to launch from Cape Canaveral, Florida, and bring two astronauts into orbit on an American rocket. Demo-2 is the result of a functional institution—but rather than a revived NASA, it is SpaceX, a comparatively new private company. This is not a success of privatization but circumvention. Live players like Elon Musk find ways to maneuver around decaying bureaucracies, at times delivering good results to their dysfunctional patron, even when the patron does their best to stop them. This process of new development has been an act of social as much as material engineering. Musk has had to maneuver politically to make room for SpaceX, to spin new visions of space exploration to motivate and raise the status of his engineers, and even to manufacture his own demand for his technology.
Plain language accounts of technology’s social origin are few and far between, both because it is a demanding subject in its own right and because we’ve built our society around several distinct and mutually incompatible stories about this powerful force. The stories are planks of everything from the ethical standards of professions to the legitimacy of government institutions. Almost no written theory is intended to be a practical guide to shaping such progress. Rather, it is written to inspire individual technical skill or to legitimize economic and political arrangements. Ideally both.
Many scientific hagiographies take the romance and achievements of historical scientists, industrialists, and inventors and then transfer these halos to institutions that would have never tolerated those pioneers in the first place. The Teslas and Fords of the world are many things, but they are not agreeable to us. This invites distrust, sometimes justified. The groundbreaking computer scientist Alan Turing was likely killed by British intelligence on suspicion of Soviet espionage, in what was for generations described as a tragic suicide. Teenagers motivated to master technical subjects by the promise of moving history eventually find themselves harnessed by romanticized but declining institutions that go to great lengths to ensure that history moves no further—such as Google or contemporary Los Alamos.
Any pursuit which requires developing implicit expertise and repeated practice will benefit from individual instruction. The esoteric and well-grounded knowledge needed for creation can’t be achieved at industrial scales and tolerances. If you attempt to lop off one end of a bell curve, you’ll always lop off both. Bureaucratic evaluation of people at scale, no matter how much it is aimed at merit, ultimately always first tests to see if someone is an outlier. A mass system has no more place for outliers than an assembly line. Perhaps this makes Six Sigma the M-theory of all theories of government espoused by the great powers of the 20th century. The industrial mass society we built to sustain and utilize many of our technologies undermines the creation of new ones.
Describing how things are is almost always understood to be an implicit justification for how things should or could be. Hume’s Is-Ought Problem is notable precisely because everyone ignores it. An account of the social origin of technology, then, necessarily carries political weight. This weight can be well-suited to a revolutionary payload, as was well-demonstrated by Karl Marx and Ayn Rand. Shaping history through technology requires precisely such an understanding, however. So this is a risk worth taking.

Samo Burja is the founder of Bismarck Analysis, a consulting firm that investigates the political and institutional landscape of society. He is also a research fellow at the Long Now Foundation. You can follow him on Twitter @SamoBurja.