On Computable Ethics

The Internet is the largest example of an economy of scale. Under the rules of capitalism, a SaaS product requires an enormous upfront investment[1] and has a near-zero per-user cost[2]. As a consequence, we don’t have a global Internet now; we have a globally-available US Internet. Even if you don’t live on the English-speaking Internet, chances are that the majority of the Internet you use is powered by US companies.

This concentration is even more pronounced at the infrastructure level of programming languages, operating systems, open-source libraries, and cloud providers.

Even though those companies operate globally, their culture and set of ethical values are firmly rooted in the US, and that provides the anchor for what is considered “good”. Clashes with different definitions of “good” in other countries underpin most of the tech controversies of the past decades.

I think solving this is the defining challenge of our time. While unsolved ethics enforcement on social networks can lead to genocide, we are about to enter much trickier waters with autonomous cars, alternative global currencies, and various attempts at autonomous human organizations.

The Ethics Algorithm

At the core, there is the same problem: expressing morality and ethics for human interaction as an algorithm. I’d argue this is not a new problem: at a less precise level, it is what moral philosophy has been trying to get a grasp of for millennia.

Whenever a new Big Tech scandal is discovered, there are calls for more philosophy and ethics courses in computer science degrees as a magic-wand solution. While I agree those courses are interesting, I believe they’d do exactly nothing to solve the problem:

  • The decision-makers for those tricky problems are the business people and the product designers, not the software engineers
  • The consequence would still be an export of your own culture globally
  • For an engineer having to express ethics in code, there is no correct solution

I believe the closest we have to “an ethics algorithm” is rule consequentialism, and, well, it has gaps, to put it mildly[3]. Moreover, the system’s algorithm has to solve not only the ethical rules, but also personal preferences, along with cultural scales similar to the Inglehart-Welzel map. But even if those were solved, I think such a system would still be wildly rejected.
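
To make the difficulty concrete, here is a minimal sketch of what a rule-consequentialist “ethics algorithm” could look like in code. Everything below is hypothetical and invented for illustration; the point is how many hand-waved concepts (a consequence, a unit of well-being, an aggregation function) would have to be pinned down in fine-grained detail:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical types; every one of them hides an unsolved problem.
@dataclass
class Action:
    description: str

@dataclass
class Outcome:
    affected_people: int    # Who counts as affected? Unsolved.
    wellbeing_delta: float  # What unit measures well-being? Unsolved.

# A "rule" maps an action to the outcomes of its universal adoption.
Rule = Callable[[Action], list[Outcome]]

def rule_utility(rule: Rule, action: Action) -> float:
    # Rule consequentialism: judge the rule by the aggregate outcome
    # of everyone following it, not by the single act.
    return sum(o.affected_people * o.wellbeing_delta for o in rule(action))

def is_permitted(action: Action, rules: list[Rule]) -> bool:
    # Aggregated how? Weighted by whose preferences, and by which
    # position on a cultural map like Inglehart-Welzel? Unsolved.
    return all(rule_utility(rule, action) >= 0 for rule in rules)
```

Each comment marks a place where moral philosophy has debated for centuries and where a programmer would have to commit to one concrete answer.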

Imposing Good

The designers of any HCI system are the judges who decide the outcomes of ethical and moral conflicts for all of its users.

This is not inherently new either: so are the owners of companies, who have ultimate power over the internal rules that shape the company culture. However, the freedom of “company design” is far more limited by laws and regulations. We are only slowly catching up to agree that some bad ideas[4] shouldn’t roam around freely, and we are nowhere near tackling the tricky ones[5].

The second difference is the amount of choice. As society becomes digitalized, there is arguably more freedom in choosing the company you work for than there is in selecting the systems you use[6]. People therefore have less choice in which moral framework they participate in, and they rightfully feel like “good” is being imposed on them. Whether to impose superior moral values on others is a dilemma in itself[7], but regardless of the outcome, I think there is a sufficient historical track record of explosive rejection when people who don’t share those values are locked out of participation in society.

No Easy Way Out

How to solve this? I don’t know. But I think it’s the strongest argument against global systems that I’ve heard.

Decentralization is the name of the game for a lot of people, but I have yet to see it work. The way to make decentralized systems work together is through federation, but I haven’t observed many federated systems that don’t centralize over time into a monopoly or an oligopoly. Sharing a node in a federated system allows for the economies of scale mentioned in the beginning, and once a node serves a sufficient number of users, it starts to dictate the federation rules and is often incentivized to stop federating, as the toy simulation below illustrates. Not having network effects would help, but that seems hard in an inherently networked system.
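
A back-of-the-envelope simulation shows how little it takes for this concentration to emerge. The growth model and all numbers below are my own assumptions for illustration, not data: each new user picks a node with probability proportional to its size raised to a small exponent, standing in for the economies-of-scale advantage of larger nodes:

```python
import random

def simulate_federation(nodes: int = 20, users: int = 100_000,
                        scale_exponent: float = 1.05) -> list[int]:
    # Toy model: every new user joins a node with probability
    # proportional to size ** scale_exponent. An exponent above 1.0
    # stands in for economies of scale favoring the larger node.
    sizes = [1] * nodes
    for _ in range(users):
        weights = [size ** scale_exponent for size in sizes]
        sizes[random.choices(range(nodes), weights=weights)[0]] += 1
    return sorted(sizes, reverse=True)

sizes = simulate_federation()
print(f"largest node holds {sizes[0] / sum(sizes):.0%} of all users")
```

Superlinear preferential attachment like this is known to produce winner-take-all outcomes; the federation analogue is the dominant node that no longer needs to federate.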

Moreover, it seems like ethical decisions are rarely made consciously. They happen as a side effect of optimizing for the true values, like growth or profit. And those who make conscious decisions to limit growth or profit for the sake of ethics are outcompeted; because the Internet’s network effects and economies of scale favor having one true winner, they are outcompeted out of existence.

A good start would be to talk about this explicitly. There are some conversations, but they seem limited in scope. Regulatory conversations seem to happen through a lens of national interests[8]. Global conversations seem to happen more on the infrastructure level, and they seem biased towards the West as well, although that may be my own bias from living primarily on the Western Internet.

The conversations I know of are listed below; if you know about more, please let me know and I’ll add them.

Further Reading

The name of the article is a hat tip to Stephen Wolfram’s idea of computable knowledge. Some of the existential angst is a hat tip to Scott Alexander’s Meditations on Moloch.


  1. Sure: the whole point of a startup economy is that you can start a product with very limited capital cost. But to get to any non-trivial scale and usage, there needs to be an enormous cumulative investment of time from a highly skilled and well-paid group of engineers. Whether you can bootstrap it or need outside investment is a different discussion. ↩︎

  2. In terms of operations, not necessarily user acquisition. The point is that per-user cost is lower for larger systems: roughly, per-user cost = fixed development cost / number of users + marginal cost, so efficiencies in computational density lower the marginal part and scale amortizes the fixed part. Again, I am talking about S&P-500 scale; yes, costs can be much lower if the audience of your system is your 10 trusted neighbors. ↩︎

  3. Read the linked article with the filter of “this would need to be expressed, in the most fine-grained detail, as computer code”. ↩︎

  4. Like “how about we try what we have learned about addiction in gambling and apply it globally to create addictive usage patterns and money exploitation?” ↩︎

  5. Like “at which point should the propagation of a meme through a networked human mind be limited or stopped?” ↩︎

  6. This is of course not universally true across the socio-economic spectrum, nor globally. But please bear with me for the sake of the argument: I believe that as the world becomes wealthier and safer in both average and median, this is becoming true for more and more people. ↩︎

  7. I hear a lot of people relativizing this and saying “there are no superior values, everything is relative to your culture”, but I disagree. As a baseline, we globally agreed that genocide and child rape are bad, and we have put institutions in place that have the power to violate national sovereignty to protect people from them. How well they work is a different question, but the point is that the agreement is there and it imposes a moral value we consider superior. Notice that when violators defend themselves, the argument is “this case is not a genocide”, or “it was deserved because we were provoked”, or “the others should not be considered human”, never “genocides are fine and everybody should be allowed to do them”. ↩︎

  8. The situation worsens with exported systems of vested national interests, e.g. weapons-grade spying systems, autonomous drones, or “authoritarian regime as a software service”. ↩︎
