APIs Are Ontological Boundaries

Why are APIs complicated? In my theory, it has nothing to do with technology. APIs literally translate between realities, and that is not going to get easy any time soon.

What Is An API?

In its literal sense, Application Programming Interface means [1] “being able to use someone else’s code reliably.” This is achieved through encapsulation and information hiding: whatever lurks beneath the API is none of my concern as a caller; I am interested only in the exposed interface and data.

APIs are layered like an onion. In the context of calling http://myapi.example.com/myresource, the Linux kernel is not considered an API, but it may very well be one for the implementation of /myresource.
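To make the information-hiding point concrete, here is a minimal sketch in Python, reusing the example URL above; the function name and payload shape are hypothetical, not a prescribed design:

```python
# A minimal sketch of information hiding; names are hypothetical.
# Callers see only fetch_resource(); the transport, URL layout, and
# decoding beneath it are none of their concern.
import json
from urllib.request import urlopen

def fetch_resource(resource_id: str) -> dict:
    """The exposed interface: a resource identifier in, a plain dict out."""
    # Implementation detail, hidden behind the function boundary:
    with urlopen(f"http://myapi.example.com/{resource_id}") as response:
        return json.loads(response.read())

# A caller depends only on the contract, not on the implementation:
# fetch_resource("myresource") -> {"id": "myresource", ...}
```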

For this article, let me define API in a more specific way.

APIs define boundaries between systems.

What Is A System?

A system is a model of reality, and a model means simplification. If you haven’t read or listened to Allison Parrish’s excellent Programming is Forgetting, stop and do that now.

One of her great points [2] is this (paraphrased): what you omit is as important as what you leave in. A model of a person differs from system to system. For a blog, your handle may be enough. Not so for a social network, and even less so for a medical system.

This is true not only for the data model but also for the data flow and the processes around it. Who can view the data, and who can change it? How is it shared? Again, there is a world of difference between a “person model” on a blog and one in medicine.
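As a hedged illustration of how much a “person” model can differ, here is a Python sketch; the field choices are hypothetical, and the point is what each model omits:

```python
# Two models of the "same" person; each forgets what its system can afford to.
from dataclasses import dataclass
from datetime import date

@dataclass
class BlogAuthor:
    handle: str               # a handle is enough to attribute a post

@dataclass
class Patient:
    given_names: list[str]    # legal names, not handles
    family_name: str
    date_of_birth: date       # needed for dosage, never for a byline
    blood_type: str | None    # "unknown" is a valid, explicit state
```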

In addition, there is a feedback loop between the system model and reality. Implementation details in a government system for registering companies influence how, and whether, people start them.

Systems thus encapsulate their own reality in a literal and ontological sense, in both the philosophical and the computer science meaning of the word. The system’s model represents reality for all agents within its boundary.

APIs Are Collisions

Connecting systems through APIs therefore represents a collision, and often a conflict, between realities. APIs are complicated because every API use requires resolving the conflict arising from the collision of the systems’ models. But here’s the thing: this is not a bug, it’s a feature.

The naïve approach taken by many “holistic systems architects” is an attempt to unify the models between systems as much as possible. I believe this is not only difficult but actively harmful. The broader the imposed context, the wider the harm.

For a good example of how model unification solidifies the biases of the original models, look at how schema.org defines a person. It may not be a bad starting point for “guessing who the author is.” Structuring a name as “givenName” and “familyName” says something about the culture. Picking “taxID”, “vatID”, “netWorth”, “hasPOS”, and “funder” as core attributes says something about the system’s intended usage [3].

Every time you adopt someone else’s model wholesale, you embrace their reality without knowing what they decided to omit, intentionally or not. Adopting a model makes sense when the other model is a subset of yours (e.g., when building a Linux daemon application, it makes sense to adopt the Linux process model), but it is dangerous when it is not (e.g., when integrating permission models).

The Lure Of The Central Data Model

The core issue in designing APIs is deciding where the system boundaries are. Service-based systems implicitly align system boundaries with network boundaries. The data model is consistent within a service, usually because it uses a single database. But should consuming services share the data model of the services they call, or should they have their own?

Deciding this is one of the most consequential decisions for system architecture. It should be done explicitly and thoughtfully.

Naïvely opting for complete consistency has some benefits. It makes sense to have a central repository of all models in the form of data structures. From those model definitions, one can generate a language-specific representation of the messages and adopt it as the service model. This is what the protobuf/gRPC combination goes for, as well as Avro and Thrift.
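To make the trade-off concrete, here is a minimal Python sketch of that approach; PersonMessage is a hand-written stand-in for what a schema compiler such as protoc would emit, and all names are hypothetical:

```python
# A sketch of the central-model approach; PersonMessage stands in for a
# class generated from a shared schema (protoc, Avro, or Thrift would
# emit something similar in each target language).
from dataclasses import dataclass

@dataclass
class PersonMessage:          # "generated" from the central repository
    given_name: str
    family_name: str

# The service adopts the wire message directly as its domain model.
# No serialization code to write, but every rename in the central schema
# now ripples through every service that made the same choice.
def display_name(person: PersonMessage) -> str:
    return f"{person.given_name} {person.family_name}"
```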

However, complete model consistency is another way of saying “domain coupling”. Evolution becomes significantly harder [4] as the system scales. I have often seen it held up as a laudable goal by architects in theory, but worked around by service developers in practice. Whether this cost outweighs the benefit of not having to write serialization code should be a conscious decision.

Minimum Viable Data

While I understand the benefits of a consistent model, I prefer to have it broken down into well-defined areas [5] and kept at the protocol level only. I think there are significant benefits in an explicit translation layer between the internal service data representation and the protocol or storage data representation [6], which is then used as a contract with other services.

A service should expose only data that someone has requested. Preemptive disclosure expands the contract footprint and complicates future evolution. A specification of the exposed data is very useful and should be written in an appropriate description format, published in machine-readable form together with the interface [7].
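Here is a minimal sketch of such a provider-side translation layer in Python, with hypothetical names; the internal model is deliberately richer than the published contract:

```python
# A provider-side translation layer; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class InternalAccount:
    id: str
    email: str
    password_hash: str                  # internal only, never in the contract
    marketing_flags: dict = field(default_factory=dict)  # nobody asked for this

def to_wire(account: InternalAccount) -> dict:
    """Translate the internal model into the published contract.

    Only fields someone actually requested are exposed; everything else
    stays behind the boundary, free to change.
    """
    return {"id": account.id, "email": account.email}
```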

Clients should have an explicit domain translation layer of their own. This covers both data deserialization and translation between the provider’s data model and the client’s. Be mindful of independent co-evolution: defensively inspect the attributes you need and ignore the rest.
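A matching client-side sketch, again with hypothetical names and payload shape, might look like this:

```python
# A client-side translation layer; provider payload shape is assumed.
from dataclasses import dataclass

@dataclass
class LocalContact:           # the client's own model, not the provider's
    display_name: str
    email: str | None

def from_provider(payload: dict) -> LocalContact:
    # Defensively inspect just the attributes this client needs; fail
    # loudly if a required one is missing, silently ignore the rest so
    # the provider can evolve independently.
    return LocalContact(
        display_name=payload["name"],   # required by this client
        email=payload.get("email"),     # optional, may be absent
    )
```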

Protocols

I have talked about data because it defines the boundaries from the domain perspective. There is another boundary to consider: the control boundary. Can you mandate, or at least affect, how a client is developed? Domain boundaries often start out aligned with control boundaries, but they deviate as systems grow.

I consider some protocols better for communication across the boundary and some better within it. “HTTP APIs” [8] built on Web and REST constraints have demonstrated properties helpful for such cross-domain contracts.

GraphQL, AsyncAPI, and gRPC have brought excellent developer-productivity tooling that works great within the control boundary.

Conclusions

  • Acknowledge your model is tied to your system
  • Declare explicit system boundaries
  • Embrace domain translation layers

Acknowledgments

The Programming Is Forgetting talk expanded my understanding of how systems contain biases. Trying to embrace RDF and JSON-LD showcased to me how the existing models define available reality. The GraphQL protocol forced me to articulate problems with central data models. It also showcases the benefits of tooling based on an explicit data specification. And of course, Domain-Driven Design is where I first explored the relationship between a technical implementation, a system, and a human language.

Thanks to Jakub Roztočil for feedback and corrections.


  1. My reinterpretation of Wikipedia’s definition and Joshua Bloch’s talk. Bear with me. ↩︎

  2. Besides talking about my favorite example of how Western-centric the Internet is. ↩︎

  3. This is a pattern I see with the currently popular “SQL over the wire” format, GraphQL. Stored data is serialized into objects using an ORM, and the whole object is exposed to the query protocol. This coupling doesn’t end well. ↩︎

  4. Like even harder than under normal, challenging conditions. This is where people at Google go and write projects for protobuf-to-protobuf transpiling. ↩︎

  5. Bounded contexts, in DDD lingo. ↩︎

  6. For a long time, I couldn’t make up my mind whether I preferred tight coupling between application objects and their database store (as in Rails or Django) or an explicit separation of data-store objects and application objects (as in n-tier applications). This provides an answer for me: it makes sense to decouple if the database is used by other services. That is now a less common use case, as network APIs have sufficiently low latency that it’s feasible to use them to abstract the data store away. ↩︎

  7. For HATEOAS architectures, they should be part of the message rather than out-of-band. Unfortunately, I think that for most interfaces, the benefits of “no out-of-band information” don’t outweigh the downsides of inflated message size. ↩︎

  8. A loosely-defined term to distinguish from full REST APIs. I’d consider JSON:API a great example. ↩︎
