How I think about interoperability at scale
Interoperability sounds like a technical problem.
Make systems talk. Align schemas. Expose APIs.
At small scale, that framing works. At larger scale, it breaks down.
Interoperability at scale is less about connectivity and more about coordination.
The first shift in thinking is recognising that not all systems should integrate deeply.
Tight coupling increases speed early on: one team can directly read another's database, systems share state, changes propagate automatically.
But it raises the cost of change as the system grows. At scale, evolving your own system starts to require permission from every team coupled to it. Interoperability needs to be selective, optimising for resilience and independence, not just convenience.
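The difference is easy to see in miniature. This hypothetical Python sketch (the module, the `_stock` dict, and `units_available` are all invented for illustration) contrasts a consumer that reaches into another team's internals with one that goes through a narrow, deliberate entry point:

```python
# Hypothetical "inventory" module owned by another team.
# _stock is an internal detail, not a contract: it could become a
# database table or a cache tomorrow.
_stock = {"sku-1": 5}

def units_available(sku: str) -> int:
    """The only supported entry point for other teams."""
    return _stock.get(sku, 0)

# Tight coupling: works today, breaks the moment _stock is renamed,
# restructured, or moved behind a service boundary.
tight = _stock["sku-1"]

# Loose coupling: survives any internal refactor that preserves the
# function's behaviour.
loose = units_available("sku-1")
```

Both lines return the same number today; only the second one leaves the owning team free to change their internals without asking anyone.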
The second shift is around contracts.
Stable contracts matter more than flexible ones. When many teams depend on the same interfaces, predictability beats expressiveness.
A contract that changes monthly creates coordination overhead across every consumer. A contract that changes yearly, with clear versioning and deprecation, lets teams build with confidence.
Clear boundaries reduce the need for constant coordination.
I worked in an organisation where 30+ services all integrated with a "flexible" user service. It returned different fields based on which team asked. Each team got exactly what they needed.
When we tried to add a new field, we had to coordinate with 12 teams to understand whether the change would break them. A simple schema update took months.
A stable contract, "here's what we return, period", would have been less flexible but far more scalable. Flexibility at scale becomes fragility.
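What a stable, versioned contract might look like in practice, as a minimal Python sketch. Everything here is hypothetical: the `UserV1` shape, the `user.v1` schema tag, and `get_user_v1` are invented names standing in for a real user service.

```python
from dataclasses import dataclass, asdict

# Hypothetical v1 contract: every consumer gets the same fields.
# No per-team variants, no silently mutating the shape.
@dataclass(frozen=True)
class UserV1:
    id: str
    display_name: str
    email: str

def get_user_v1(user_id: str) -> dict:
    """Return a user in the fixed v1 shape, tagged with its schema version."""
    # Stand-in for a real lookup; the point is the fixed, tagged shape.
    user = UserV1(id=user_id, display_name="Ada", email="ada@example.com")
    return {"schema": "user.v1", **asdict(user)}
```

Under this discipline, adding a field means publishing a `user.v2` alongside `v1` with a deprecation window, not changing what `v1` returns. Consumers build against a shape that provably never moves under them.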
Another key consideration is ownership.
Interoperability fails when it sits between teams with misaligned incentives. One team optimises for velocity, another for stability, and the integration becomes a battleground.
Someone needs to own the relationship between systems, not just the code, and be accountable for how changes propagate, how failures are handled, and how evolution happens.
I also think about interoperability in terms of failure modes.
What happens when one system is slow, unavailable, or wrong?
At scale, those scenarios aren't edge cases; they're inevitable. Every dependency is a potential point of failure.
Designing for graceful degradation (cached fallbacks, circuit breakers, clear timeout strategies) matters more than perfect integration. The question isn't if it will fail, but how it fails.
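A minimal sketch of one of those techniques, a circuit breaker paired with a cached fallback. This is an illustrative toy, not production code: `CircuitBreaker`, its thresholds, and the callables passed to it are all assumptions for the example.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive failures,
    calls are short-circuited for reset_after seconds and the
    fallback (e.g. a cached response) is served instead."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker opened

    def call(self, fn, fallback):
        # While open, skip the dependency entirely and degrade gracefully.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            # Half-open: the reset window elapsed, allow one trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0
        return result
```

The shape of the failure is now a design decision: a slow or broken dependency costs one timeout per window instead of one per request, and consumers see stale data rather than errors.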
Finally, I try to be honest about cost.
Interoperability creates long-term obligations. Every connection adds coordination overhead, testing complexity, and deployment dependencies.
At scale, fewer, better-defined integrations usually outperform many shallow ones. Five deep, stable integrations scale better than fifty fragile ones.
For me, interoperability isn't about making everything work together.
It's about making the system evolvable over time, where changes don't require permission from everyone, where failures are isolated, where teams can move independently.
Loose coupling scales. Clever integration doesn't.