Philosophy of Observation
Information Networks & Truth
Why the architecture of information systems determines whether they produce understanding or delusion
The structure of an information network — not just the data flowing through it — determines whether that network produces truth or delusion. A network with self-correction mechanisms, error detection, and distributed verification tends toward truth. A network optimized for speed, engagement, or institutional convenience tends toward whatever narrative serves its operators. This principle, drawn from Yuval Noah Harari's Nexus, is foundational to how M33 designs its data architecture: provenance is not a feature but a structural requirement for any system that claims to represent reality.
Why It Matters
There is a quiet assumption in earth observation that more data equals more truth. Better sensors, higher spatial resolution, faster revisit times (higher temporal resolution).
All of these are treated as progress toward understanding. And they are, in a mechanical sense. A 10-meter pixel reveals more than a 30-meter pixel.
But resolution is not truth. A high-resolution image that has been miscalibrated, processed through a pipeline with no quality checks, stripped of its metadata, and delivered without any indication of its confidence level is not more true than a lower-resolution image with full provenance. It is more detailed, but detail and truth are not the same thing.
The question is not just "what does the data say?" but "what kind of information architecture makes it possible to trust what the data says?"
This is Harari's central insight in Nexus: that throughout human history, the critical variable has not been the quantity of information available but the structure of the networks that process it. Bureaucracies, scientific communities, religious institutions, social media platforms — each is an information network with specific architectural properties that determine whether it converges toward truth or drifts toward delusion.
The same applies to earth observation systems. A data pipeline that optimizes for speed of delivery over confidence scoring will eventually deliver convincing falsehoods. A system that strips metadata to reduce file sizes will eventually lose track of what its data actually represents. An architecture that treats provenance as optional will eventually be unable to distinguish between verified observation and generated artifact.
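The claim that provenance should be structural rather than optional can be made concrete in code. The sketch below is purely illustrative (the class and field names are assumptions, not M33's actual schema): an observation record that cannot even be constructed without its provenance, so stripping metadata is a rejected operation rather than a silent default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """A minimal observation record where provenance is structurally required."""
    pixel_value: float
    sensor_id: str            # which instrument produced this value
    acquired_at: str          # ISO-8601 acquisition timestamp
    processing_steps: tuple   # ordered record of every transformation applied
    confidence: float         # 0.0-1.0 estimate of measurement reliability

    def __post_init__(self):
        # Refuse to construct a record whose provenance is missing:
        # the data structure itself, not a downstream convention, enforces it.
        if not self.sensor_id or not self.acquired_at:
            raise ValueError("observation rejected: provenance is required")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be in [0.0, 1.0]")

# A record with full provenance constructs normally...
ok = Observation(0.42, "S2A-MSI", "2024-06-01T10:30:00Z",
                 ("radiometric_calibration", "atmospheric_correction"), 0.91)

# ...while one stripped of its metadata is rejected at the boundary.
try:
    Observation(0.42, "", "", (), 0.5)
except ValueError as e:
    print(e)  # observation rejected: provenance is required
```

The design choice this illustrates is the one the paragraph above argues for: making provenance a constructor requirement means a pipeline cannot "optimize away" metadata without failing loudly, which is exactly the kind of built-in error detection that distinguishes a self-correcting network from one that drifts.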