Today I want to talk about a possible emerging successor to net neutrality, which I call charisma neutrality. I think it's a plausible consequence of a very likely technological future: pervasive end-to-end encryption. (17 minutes)
Now net neutrality, of course, was part of a very important chapter in the history of technology. Though the principle is now pretty much down for the count, for a few decades it played a hugely important role in ensuring that the internet was born more open than closed, and more generative than sterile.
Even though the principle was never quite as perfectly implemented as some people imagine, even when there was a strong consensus around it, it did produce enough of a systemic disposition towards openness that you could treat it as more true than not.
That era has mostly ended, despite ideological resistance, because even though it is a solid idea with respect to human speech, it is not actually such a great idea relative to the technical needs of different kinds of information flow. So as information attributes — stuff like text versus video, and real-time versus non-real-time — began to get more varied, the cost of maintaining net neutrality in the classic sense became a limiting factor.
And at least some technologists began seeing the writing on the wall: the cost of net neutrality was only going to get worse with AI, crypto, the internet of things, VR and AR.
What was good for openness and growth in the 1980s and 90s was turning into a significant drag factor by the aughts and 10s.
What was good for growing from 2 networked computers to several billion was going to be a real drag going from billions to trillions.
I think there’s no going back here, though internet reactionaries will try.
To understand why this happened, you have to peek under the hood of net neutrality a bit, and understand something called the end-to-end principle. It's an architecture principle that basically says all the smarts in a network should be in the endpoint nodes that produce and consume information, and the pipes between the nodes should be dumb. Specifically, they should be too dumb to understand what's flowing through them, even if they can see it, and therefore incapable of behaving differently based on such understanding. Like a bus driver with face-blindness who can't tell different people apart, only check their tickets.
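To make the idea concrete, here's a toy sketch (in Python, purely illustrative, not real network code): the pipe just moves opaque bytes, while encoding, decoding, and all application logic live at the endpoints.

```python
# Toy illustration of the end-to-end principle.
# The pipe relays opaque bytes; it attaches no meaning to them.

def dumb_pipe(payload: bytes) -> bytes:
    # No parsing, no prioritization, no per-content behavior:
    # the pipe cannot treat video differently from text because
    # it never interprets what it carries.
    return payload

# All the "smarts" live at the endpoint nodes.
def sender(message: str) -> bytes:
    return message.encode("utf-8")   # endpoint encodes

def receiver(payload: bytes) -> str:
    return payload.decode("utf-8")   # endpoint decodes

print(receiver(dumb_pipe(sender("hello"))))  # → hello
```

The moment the pipe starts inspecting payloads to treat some traffic differently, the principle is broken, which is exactly the kind of content-aware behavior net neutrality was meant to rule out.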
Now, for certain regimes of network operation and growth, the end-to-end principle is very conducive to openness and growth. But ultimately it’s an engineering idea, not divine gospel, and it has limits, beyond which it turns into a liability that does not actually address the original concerns.
To see why, we need to dig one level deeper.
The end-to-end principle is an example of what in engineering is usually called a separation principle. It is a simplifying principle that limits the space of design possibilities to ones where two things are separate. Another example is the idea that content and presentation must be separated in web documents. Or that the editorial and advertising sides of newspapers should be separate. Both of these again got stressed and broken in the last decade.
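The content/presentation separation mentioned above can be sketched in a few lines (a deliberately minimal Python example, not any particular web framework): the same content structure gets rendered two different ways, and neither renderer touches the content.

```python
# Toy illustration of a separation principle: content is kept
# apart from presentation, so either side can change independently.

# The content: pure data, no formatting decisions.
article = {"title": "Net Neutrality", "body": "Dumb pipes, smart ends."}

def render_html(doc: dict) -> str:
    # Presentation choice #1: HTML markup.
    return f"<h1>{doc['title']}</h1><p>{doc['body']}</p>"

def render_text(doc: dict) -> str:
    # Presentation choice #2: plain text.
    return f"{doc['title'].upper()}\n{doc['body']}"
```

The separation buys simplicity: you can restyle without rewriting content. And as the essay notes, it erodes in practice, since there are far more ways to entangle data with its rendering than to keep them apart.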
Separation principles usually end up this way, because there are more ways for things to be tangled and coupled together than there are for them to be separate. So it's sort of inevitable that they'll break down, by the law of entropy. Walls tend to leak or break down; it's sort of a law of nature.
Whether you’re talking about walls between countries or between parts of an architecture, separation principles represent a kind of reductive engineering idealism to keep complexity in check. There’s no point in mourning the death of one separation principle or the other. The trick is to accept when the principle has done its job for a period of technological evolu