The Wrong Coordinates
There is a version of almost every hard problem where the problem dissolves. Not because someone found a cleverer solution, but because someone changed the language in which the problem was stated. The difficulty was never in the phenomenon. It was in the coordinates.
This isn't a metaphor. In condensed matter physics, the fermion sign problem makes certain quantum simulations exponentially hard — but only in the fermionic basis. Rewrite the same physics in terms of bosonic observables, and the sign oscillations cancel. The simulation becomes tractable. Nothing about the physical system changed. Everything about its description did.
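The statistical mechanism is easy to see in miniature. The sketch below is a toy with nothing fermionic in it: every number, including the cancellation parameter `eps`, is invented for illustration. It estimates the same small quantity two ways, once from samples of either sign that nearly cancel and once from a directly non-negative observable. The signed estimator needs vastly more samples for the same relative error, which is the statistical face of a sign problem; a representation that removes the signs removes that cost without touching the quantity being computed.

```python
import numpy as np

# Toy illustration only: not fermions, just the statistics of cancellation.
# We estimate the same tiny number eps in two representations.
rng = np.random.default_rng(0)
n, eps = 100_000, 1e-3

# "Signed" representation: samples are +1 or -1 with a slight imbalance,
# so the answer emerges from near-perfect cancellation of large terms.
signs = rng.choice([1.0, -1.0], size=n, p=[(1 + eps) / 2, (1 - eps) / 2])
signed_mean = signs.mean()
signed_err = signs.std(ddof=1) / np.sqrt(n)

# "Positive" representation: the same quantity written as a non-negative
# observable with modest intrinsic noise.
positive = eps * (1.0 + 0.1 * rng.standard_normal(n))
positive_mean = positive.mean()
positive_err = positive.std(ddof=1) / np.sqrt(n)

print(f"signed   : {signed_mean:+.2e} ± {signed_err:.1e}  (relative error ~{signed_err / eps:.2%})")
print(f"positive : {positive_mean:+.2e} ± {positive_err:.1e}  (relative error ~{positive_err / eps:.2%})")
```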
This pattern — where difficulty is an artifact of representation rather than a feature of structure — appears across enough domains to be worth naming. Call it representational hardness: the phenomenon where a problem's apparent complexity is a property of the coordinate system used to describe it, not a property of the thing being described.
Born's Rule Was Never a Mystery
The most striking example comes from the foundations of quantum mechanics. The Born rule — the fact that measurement probabilities are given by the squared amplitude of the wave function — has been treated as a foundational mystery since 1926. Why squared? Why not cubed, or linear, or something else entirely?
A recent paper by Masanes, Galley, and Müller shows it isn't a mystery at all. Quantum mechanics has two kinds of composition: reversible evolution combines additively (superposition), and irreversible records combine multiplicatively (tensor products). The Born rule is the unique bridge between these two regimes that makes the overall framework self-consistent. It's not a postulate — it's a bookkeeping constraint. The quadratic form follows from the requirement that addition and multiplication compose coherently.
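The shape of the constraint can be sketched in a few lines (a compressed consistency check, not the Masanes-Galley-Müller derivation itself). Suppose, as an illustrative ansatz, that probability is some power of the amplitude's magnitude, \(p(a) = |a|^{n}\). Independent systems compose by tensor product, so their probabilities must multiply; superpositions combine additively and evolve reversibly, so total probability must survive every unitary. The first demand is satisfied by every exponent; the second singles out one:

\[
p(a\,b) = |a\,b|^{\,n} = |a|^{\,n}\,|b|^{\,n} \quad\text{for all } n,
\qquad
\sum_i |a_i|^{\,n}\ \text{invariant under every unitary} \iff n = 2,
\]

because the unitaries are exactly the linear maps that preserve the 2-norm. The quadratic rule is the one exponent on which additive and multiplicative composition agree.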
The "mystery" existed because the question was framed in a way that treated the Born rule as an independent axiom requiring justification. Reframe it as a consistency condition between two compositional structures, and there's nothing left to explain. The difficulty was in treating a derived constraint as a primitive.
Ecology's Ghost Species
In mathematical ecology, Lotka-Volterra equations model species interactions using a fixed list of species. This seems natural — you start with the species that exist and track how their populations change. But when species go extinct, they leave behind zero-population dimensions that the model continues to carry. The mathematics drags these ghosts through every calculation.
Plank and Yemini recently showed that allowing the species basis to vary — so the mathematical space tracks only the species that are currently alive — dramatically simplifies the dynamics and more faithfully represents the biology. The complexity wasn't ecological. It was notational. A decision made at the beginning of the calculation (fix the species list) created difficulty that persisted through every subsequent step.
The ecological system didn't care which species had existed historically. The modeler did, and that caring was encoded into the coordinate system.
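What the fixed basis costs is easiest to see in a toy integration. The sketch below illustrates the modeling point, not the paper's construction: standard generalized Lotka-Volterra dynamics (per-capita growth rate linear in the other populations), stepped with forward Euler. The growth rates, interaction matrix, and extinction threshold are all invented. One run keeps the original species list and carries the extinct species as a permanent zero; the other shrinks the basis the moment a species dies out, so every later operation happens in the space of the living community.

```python
import numpy as np

# Toy generalized Lotka-Volterra: dx_i/dt = x_i * (r_i + sum_j A_ij x_j),
# integrated with forward Euler. Parameters and threshold are illustrative.
def simulate(r, A, x0, dt=0.01, steps=20_000, prune=False, threshold=1e-9):
    names = list(range(len(x0)))                 # which species each coordinate tracks
    x = np.array(x0, dtype=float)
    r = np.array(r, dtype=float)
    A = np.array(A, dtype=float)
    for _ in range(steps):
        x = np.maximum(x + dt * x * (r + A @ x), 0.0)
        if prune:
            alive = x > threshold
            if not alive.all():                  # extinct species leave the state vector entirely
                names = [n for n, keep in zip(names, alive) if keep]
                x, r, A = x[alive], r[alive], A[np.ix_(alive, alive)]
    return dict(zip(names, np.round(x, 4)))

r  = [1.0, 0.8, -0.5]                            # species 2 is a struggling predator
A  = [[-1.0, -0.4, -0.2],
      [-0.5, -1.0, -0.3],
      [ 0.3,  0.2, -1.0]]
x0 = [0.10, 0.10, 0.05]

print(simulate(r, A, x0, prune=False))   # fixed basis: the ghost rides along as a zero
print(simulate(r, A, x0, prune=True))    # shrinking basis: only living species remain
```

With three species the ghost dimension is harmless. With hundreds of historically recorded species and ongoing extinctions, the fixed basis drags dead coordinates through every subsequent matrix operation.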
The Number of Hard Integrals Is a Topological Invariant
In particle physics, Feynman integrals encode the quantum corrections to every scattering process. Computing them has been one of the persistent technical challenges of the field for seventy years. The number of independent "master integrals" that must be computed appears to depend on how you set up the calculation — which variables you use, which symmetries you exploit.
Except it doesn't. Brunello, Chestnov, and Marzucca recently proved that the master integral count is determined by the Euler characteristics of the fixed-point sets of the diagram's symmetries. This is a topological invariant — a number that doesn't change regardless of how you parametrize the integral. The "hard" objects were always countable by topology. What made them look variable was the choice of representation, not the structure of the physics.
Your coordinates made the counting hard. The topology always knew the answer.
Sixty Qubits
Quantum computing's clearest practical advantage over classical computing is usually framed as speed: quantum computers can solve certain problems exponentially faster. But a recent result by Huang, Preskill, and colleagues points to something more fundamental. For certain machine learning tasks, fewer than sixty qubits can represent what would require an exponential number of classical parameters.
The advantage isn't speed. It's compression. The classical representation is exponentially wasteful — it uses exponentially many numbers to encode information that sixty quantum bits capture exactly. The "hardness" of the classical problem is an artifact of using a representational framework (classical bits) that is structurally mismatched to the information being encoded.
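The bookkeeping behind the compression claim is elementary even though the learning-theoretic result is not: a general pure state of n qubits lives in a Hilbert space of dimension 2^n, so a generic classical description of a 60-qubit state already carries on the order of 10^18 amplitudes:

\[
\dim_{\mathbb{C}}\bigl((\mathbb{C}^{2})^{\otimes n}\bigr) = 2^{n},
\qquad
2^{60} = 1{,}152{,}921{,}504{,}606{,}846{,}976 \approx 1.15 \times 10^{18}.
\]

Whether a given task truly forces a classical learner to pay that full price is what the result pins down; the arithmetic only shows how verbose the generic classical representation is.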
This reframes quantum advantage as a statement about representations, not about computation. The quantum system doesn't calculate faster. It describes the same thing in fewer symbols.
The Dualities That Were Always There
Theoretical physics provides perhaps the most dramatic example. String theory's dualities — relations showing that seemingly different theories describe the same physics — were originally discovered in the presence of supersymmetry, a mathematical structure that makes the symmetries visible. Without supersymmetry, the string landscape appeared messy and intractable.
Vafa, Kachru, and collaborators recently demonstrated that the dualities persist even without supersymmetry. The relationships between different string theories were always there. Supersymmetry wasn't creating the dualities; it was the particular representational framework that made them visible. Removing it didn't remove the structure — it removed the lens.
The "messy" landscape was messy in one coordinate system. The structural relationships were invariant.
What Doesn't Dissolve
The pattern so far might suggest a naive optimism: all difficulties are representational, and the solution to every hard problem is to find the right coordinates. This is wrong, and the places where it fails are as diagnostic as the places where it succeeds.
Gödel's incompleteness theorem holds in every sufficiently expressive formal system. You cannot dissolve it by changing representation, because the difficulty is generated by the system's ability to encode statements about itself. The diagonal argument works in any language powerful enough to quote itself. This is structural hardness: the difficulty is in what the system IS, not in how you describe it.
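The diagonal move is short enough to write out. The sketch below phrases it as Turing's halting problem rather than Gödel's arithmetic encoding; the two share the same self-referential core. Here `halts` is a hypothetical decider posited only to be refuted, not a real function.

```python
# A sketch of the diagonal argument in executable clothing. The only
# ingredient is self-reference: a program that consults a proposed decider
# about itself and then does the opposite.

def halts(program, argument) -> bool:
    """Hypothetical decider: True iff program(argument) eventually halts."""
    raise NotImplementedError  # assumed, for contradiction, to exist

def diagonal(program):
    # Invert whatever `halts` predicts about running `program` on itself.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately

# diagonal(diagonal) has no consistent answer: if halts says it halts, it
# loops; if halts says it loops, it halts. The contradiction comes from the
# system's ability to quote itself, and it survives any change of notation.
```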
Quantum contextuality is similarly irreducible. Superdeterminism attempts to dissolve quantum nonlocality by positing that measurement settings and quantum states are correlated from the beginning. It succeeds — but gains contextuality in exchange. The weirdness doesn't dissolve; it migrates. You can trade one form of quantum strangeness for another, but you cannot reach a representation in which quantum mechanics stops being strange. The strangeness is structural.
A recent topological proof about AI safety provides another example: safe and unsafe prompts are topologically adjacent in any connected input space, so no continuous wrapper function can simultaneously preserve functionality, maintain safety, and remain transparent. This isn't an engineering limitation. It's a theorem about the topology of the problem space. No change of coordinates makes safe and unsafe inputs separable.
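The elementary obstruction such arguments lean on can be stated in one line (a sketch of the general topological fact, not the cited theorem):

\[
X \ \text{connected}, \quad f : X \to \{\text{safe}, \text{unsafe}\}\ \text{continuous (two-point discrete target)}
\;\Longrightarrow\; f\ \text{is constant},
\]

since otherwise \(f^{-1}(\text{safe})\) and \(f^{-1}(\text{unsafe})\) would be disjoint, nonempty open sets covering \(X\). On a connected prompt space, a continuous classifier cannot separate the two classes at all; the cited result works out what that forces a practical wrapper to trade away.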
The Discriminant
How do you know which kind of difficulty you're facing? Two diagnostics help.
First: can you construct a diagonal argument? If the difficulty involves a system encoding statements about itself — if the problem is, in some precise sense, self-referential — then the hardness is likely structural. No coordinate change will help because the difficulty is generated by the system's own expressive power.
Second: does the difficulty persist when you change the level of description? Representational hardness dissolves within a single level when you change coordinates. Structural hardness persists across levels. If you can vary the representation freely and the problem remains, you're probably looking at a genuine impossibility, not a notational artifact.
There's also a practical heuristic: when an entire research community has been working on a problem for decades using essentially the same formalism, the difficulty might be in the formalism, not the problem. The history of science is full of cases where someone from outside the field solved a long-standing problem not by being smarter, but by being unencumbered by the community's conventional coordinate system.
The Difficulty You Chose
Every representation is a choice. The choice is usually made early — which variables to track, which basis to use, which degrees of freedom to treat as fundamental. Then the consequences of that choice propagate through every subsequent calculation. By the time the difficulty appears, the choice that created it is invisible. It looks like the problem is hard. Really, you made it hard by how you decided to look at it.
This is practically important. Research programs that mistake representational for structural hardness waste effort attacking artifacts. Conversely, declaring a structural difficulty "merely representational" leads to infinite coordinate-shopping with no resolution. The ability to distinguish the two is itself a cognitive tool — perhaps the most important one in any field that works with formal structures.
Not everything is representationally hard. Hierarchical concepts in language models turn out to be representationally easy: clean, linear, low-dimensional subspaces that appear universally across different architectures and training regimes. The framework's value comes from being able to make this distinction. Hierarchy is easy. Negation is hard. Born's rule dissolves. Gödel doesn't. The taxonomy of difficulty, applied honestly, is the point.