Let’s consider the idea of “progress”.
We use the word to mean a lot of different things. I’m not a linguist or etymologist, so I won’t waste time quoting dictionaries or encyclopedias. I think most people have a reasonably good intuitive sense of the term. But we should try to distinguish the different senses.
Progress can be directed or undirected. It can be absolute or relative. It can be end-focused, or ongoing. If you have a concrete goal—say, passing a course in school—then you can measure your progress towards that goal as you complete sub-tasks and achieve milestones. If you pass the course, you succeed, and progress ends.
If life-long learning is your goal, that’s more open-ended and relative. It’s still concrete, but not specific. You can measure progress by defining some kind of metric that increases without limit—number of courses passed, or books read, or insights gained.
Culturally, we often talk about two main areas of progress: technology and justice. These are both open-ended and relative. We want more technology—more power over the environment—and more justice—for more people—than we had yesterday, or last year. It’s very hard to measure cultural progress in reliable terms.
What is power? What is justice?
Not everything has to be quantitatively defined, but choosing to measure things is a valuable discipline—it keeps us honest. If we don’t commit to an empirical metric, we can tell ourselves that we have made progress merely because it “feels like it”—even though our feelings may be biased, variable, or otherwise unreliable. Feelings are not a rigorous measure of anything real.
There are, of course, many bullshit “empirical” metrics of progress. A famous one is GDP (Gross Domestic Product). This is play-acting. It is feelings wrapped in numbers, pretending to be rigorous and empirical. But it is all based on money, which is a fiction made out of feelings of value—feelings that change with fashion and whim.
Economic progress is supposed to be a consequence of technological innovation, and to provide more wealth and prosperity to a population, so it is treated as a proxy for both. Yet GDP can increase without any innovation—just discover a big exploitable resource, like oil or minerals—and without any social justice—just let a tiny minority of people hoard all the money, while despotically repressing the majority.
Economics is mostly bullshit, anyway.
Nevertheless, technology and justice are somehow connected. But not linearly. More technology does not necessarily translate into more freedom, more justice, or better lives for the median person (let alone the poor and marginalized). Technology is just a fancy word for knowledge and tools, the benefits of which depend solely upon how they are deployed. They can serve injustice just fine, and often do. In fact, more and more often, lately.
The disconnect between technological—and, implicitly, scientific—progress and social progress has been a big source of pain and disillusionment for me. As a young kid, I was infatuated with science fiction visions of the future. It’s not that they were all positive; probably the opposite. Some of my first memories of science fiction include Star Wars, and various obscure novels that mostly featured dystopian or conflict-riven societies. And yet, I was still inspired by the visions of the power of human ingenuity. I simply made the naive leap to assuming that, if we could solve our material problems, we could also solve our social and relational problems.
Even if that were true (though it isn’t), I have recently become much more conscious of a deeper, more pernicious problem with my naive utopian ideology. Stories about galactic civilization make a serious implicit assumption about human ingenuity: that it is boundless. Not just that we could transcend the laws of physics, if we so chose, to break the light-speed barrier, or create artificial gravity, or other such magic. But that we could somehow manage the complexity of all that knowledge, all those clever machines—keep it all working, and run our societies, too.
In reality, we face serious boundaries of management and control, so-called cybernetics problems. We may have already gone past those boundaries. We have been struggling for decades—centuries—with the second-order consequences of technology, and its uneven and erratic distribution.
As far back as the seventeenth century, major cities could not contend with their own effluent (human and animal waste). Thankfully, we learned how to build better sewer systems. But that didn’t stop us from dumping our sewage into rivers and lakes, destroying ecosystems, some of which were essential to human health and livelihoods.
A different problem is that of knowledge transfer, both within and between generations. Most knowledge is locked up in silos of expertise and technical training. It’s very hard to acquire advanced technical and scientific knowledge. Most people cannot or will not make the necessary investment of time and effort. (Money is a proxy for time and effort, so no need to mention it separately.)
As our collective knowledge increases, the median understanding falls, relative to the average. Most people don’t know anything substantial about how anything works. In the age of machines, this problem was manageable. Anyone with the dexterity, conscientiousness, and tools could disassemble a machine and learn how it works. This began to break down with the discovery of electricity, and the invention of so many electrical, and later electronic and computational, marvels.
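The gap between median and average here is just the arithmetic of skewed distributions. A toy sketch (with entirely made-up numbers) shows how concentrating new knowledge among a few experts raises the average while the median stays put:

```python
# Toy illustration (made-up numbers): knowledge "scores" for a
# population of ten people. New discoveries accrue only to the
# two experts at the top of the distribution.
before = [1, 1, 1, 1, 2, 2, 2, 3, 10, 10]
after  = [1, 1, 1, 1, 2, 2, 2, 3, 50, 50]  # experts learn much more

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    # Average the two middle values for an even-length list.
    return (s[mid - 1] + s[mid]) / 2 if len(s) % 2 == 0 else s[mid]

print(mean(before), median(before))  # 3.3 2.0
print(mean(after), median(after))    # 11.3 2.0
```

The average more than triples, but the median person knows exactly as little as before—the “collective” gain never reaches them.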
Without working electricity, a core knowledge of electricity and semiconductors, and appropriate software, a computer’s purpose and workings cannot be determined by inspection. Electrons and electric fields are invisible to even the most powerful optical microscope.
Something similar can be said for modern medicine, especially microbiology and immunology. If you cannot see inside a cell, or see a bacterium (let alone a virus or an antibody), how can you draw any conclusions about how infectious diseases are caused, prevented, or cured? Yes, you might use statistics to identify sources of infection (like tainted water), but not the mechanism.
This is only the challenge of the knowledge of the very small. There are also the domains of the very large, the very diffuse, the very fast, and the very slow. And these are only the causal domains. They do not begin to scratch the surface of complex domains.
The more that our best experts learn, the more distant that knowledge is from the typical person. If our hopes for a universally just and prosperous society rest on individual—not just collective—understanding of science and technology, then, frankly, we’re screwed.
Yes, we could improve our schools. We could try to build a culture of learning, understanding, science, reason, and wisdom. But can we be real for a second? Why would anyone decide to just start working harder? Especially after a century or more of being told that we deserve whatever we want, whenever we want it, basically for free?
Even without the huge cultural headwinds of consumer self-indulgence, entitlement, and complacency, people are still people. A person has a limit to how much they can and should be expected to learn: how much time they can and will devote to something that gives them no satisfaction or joy. People are not technological machines. They are biological and social beings.
People who intuitively love science and technology are a minority. How many of them only turned to such studies because they could not find a home in a boisterous and extraverted culture that privileges physical prowess? The “nerd” is a stereotype, but it is based in truth. A hyper-competitive bullying society, where “losers” are abused, belittled, and rejected, is hardly a reasonable starting point for a high-tech world of universal justice. The fact is that many people still look down on science, scientists, and in general anyone who values knowledge, understanding, and wisdom. We have created a world that rewards abusers, criminals, and fools (as long as they are rich, pretty, or amusing).
Our technological world is running away from us. We fantasize about machines that can self-host, ultimately creating a rapidly self-perpetuating magical process of increasing power and complexity that somehow manages itself without breaking apart like a too-rapidly spinning flywheel. This is complete self-indulgent nonsense. There is no empirical basis for such expectations. What science we do have suggests that any such system would disintegrate, for a host of reasons. At best, it would veer off in some unexpected direction. It’s unlikely to deliberately seek to destroy us, but there’s no reason it would consider our needs any longer, once it was autonomous and free.
Really, though, such a system can never be built on purpose in any case. Self-sustaining complex systems must arise spontaneously from the environment. They must be stable and evolving. They cannot be engineered.
What we are facing, then, are the limits of our engineering prowess. We can build more, and we can build bigger, but we are at the end of our ability to build sophistication.
But it’s all a false hope, anyway. Because justice is not a product of technological understanding. Justice is a product of wisdom. Wisdom, in turn, is a product of recognizing our limitations. It is a product of humility. Unfortunately, we are not humble. We are arrogant. The only cure we know for arrogance is suffering. But sometimes, the cure is worse than the disease. Sometimes, the punishment for arrogance is death.
While it does not necessarily follow that the punishment for collective arrogance is collective death, there is always that risk. Intelligence is an adaptation. But any feature of an organism can become maladaptive, when the environment changes. It is merely ironic that the changes to our environment are themselves a consequence of our own cleverness. We are building a world unsuitable for our own habitation, even as we devote so much energy to trying to make it more comfortable.